
THE EFFECTS OF SYMMETRICAL AND ASYMMETRICAL SENSORINEURAL 
HEARING LOSS ON SPEECH PERCEPTION IN NOISE 



By 
DALE ANTHONY OSTLER 



A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL 

OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT 

OF THE REQUIREMENTS FOR THE DEGREE OF 

DOCTOR OF PHILOSOPHY 

UNIVERSITY OF FLORIDA 

2000 



This work is dedicated, with love and admiration, to my wife, Julie, and our 3 children, 
Benjamin, Kara, and Nathan, all of whom give my life such rich meaning. 









ACKNOWLEDGMENTS 

A project such as this, though carried out as "independent research," does not 
happen without the substantial input and expert assistance from many individuals. 
Principal among these are two who deserve particular mention: my wife, Julie, and my 
committee chair, Dr. Carl C. Crandell. One could not ask for a more kind and 
compassionate dissertation chair. Dr. Crandell not only taught me how to conduct and 
report research but he became a true mentor, colleague, and friend. He is the one who 
kept me on track academically and psychologically. He provided me with direction, 
focus, and motivation just when I needed it the most. 

My wife and my children, Benjamin, Kara, and Nathan were my anchors 
throughout this ordeal. My wife is the one who put in the endless hours at home while I 
was in the lab pushing buttons or re-writing this section or that. Her patience, endurance, 
and words of encouragement are truly admirable. My children kept me sane. It is 
difficult to remain down when you return home after a day of setbacks and hurdles in the 
lab to be greeted by the toothless grin of an 8-month-old, the exuberant hugs of a 4-year-old, 
and the excited requests of a 6-year-old to "come out and play." Special 
acknowledgement is also extended to my mother, Nellie Ostler, for her timely and able 
assistance to my family during the long hours of dissertation finalization. 

No dissertation is complete without a committee. I do not think I could have been 
blessed with a better, more helpful committee. I received unique insights into this project 
and my studies at the University of Florida from each one: Dr. Ken Gerhardt, Dr. Scott 




Griffiths, Dr. Alan Hutson, and Dr. (Colonel) Nancy Vause. I thank each for willingly 
sharing their knowledge, their time, their patience, and especially their encouragement. I 
would also like to thank Dr. W. S. Brown, Chair of the Department of Communication 
Sciences and Disorders, for his encouraging words and his amiable ways. Special 
mention must go to Dr. Brown and Colonel Vause for going the extra mile in helping 
secure funding for this project. 

Acknowledgement and thanks are also extended to the faculty, staff and students 
in the Department of Communication Sciences and Disorders, the University of Florida 
Speech and Hearing Clinic, and the Institute for Advanced Study of the Communication 
Process for their assistance with many facets of this project. Additionally, heartfelt 
gratitude is expressed to the professionals and staff at the Gainesville (FL) Veterans 
Affairs Audiology Clinic for their time and assistance in finding subjects for this project. 
I must also thank the more than 90 individuals (and many spouses) who, without notion 
of compensation, came to the Hearing Research Lab (some making a six-hour roundtrip) 
and endured the test conditions as subjects. 

Thanks are also given to those individuals in the U.S. Army who made the pursuit 
of a Ph.D. possible, many of whom are unknown and, therefore, unnamed. Special 
gratitude is expressed to the leaders and cohorts in Army Audiology from 1988 to the 
present who mentored me and sparked in me the notion of pursuing higher ground and 
the belief that I could attain it. 

Last, and most important, acknowledgment and thanks are rendered to a power 
greater than man's: for His sustaining guidance, for instilling belief in self when self- 
belief faltered, and for always being there. 






TABLE OF CONTENTS 

page 



ACKNOWLEDGMENTS iii 

LIST OF TABLES vii 

LIST OF FIGURES viii 

ABSTRACT ix 

CHAPTERS 

1 INTRODUCTION 1 

2 REVIEW OF LITERATURE 6 

Binaural Hearing 6 

Binaural Summation 6 

Head Shadow Effect 10 

Localization 13 

Binaural Squelch 18 

Binaural Release From Masking 21 

Summary of Binaural Advantages 24 

Other Binaural Advantages 24 

Unilateral Hearing Loss and Speech Perception 25 

Asymmetrical Hearing Loss and Speech Perception 25 

3 MATERIALS AND METHODS 30 

Materials 30 

Subjects 30 

Test Stimuli 52 

Competing Stimuli 54 

Experimental Conditions 55 

Speaker-Listener Orientation 55 

Experimental Procedures 58 

Statistical Analysis 62 



4 RESULTS 63 

Demographic Profile 63 

Hearing Thresholds 66 

Speech Reception Threshold (SRT) 70 

Summary of Results 79 

5 DISCUSSION 81 

Group by Condition Comparisons 82 

Better-Ear vs. Poorer-Ear Symmetry Group 82 

Better-Ear/Poorer-ear vs. Midline Symmetry Group 82 

Better-Ear vs. Poorer-Ear Asymmetry Group 86 

Better-Ear vs. Midline Asymmetry Group 87 

Poorer-Ear vs. Midline Asymmetry Group 88 

Better-Ear Symmetry Group vs. Better-Ear Asymmetry Group 89 

Midline Symmetry Group vs. Midline Asymmetry Group 89 

Poorer-Ear Symmetry Group vs. Poorer-Ear Asymmetry Group 90 

Summary 91 

Clinical/Military Implications 91 

Limitations of the Research 93 

Future Directions 94 

APPENDICES 

A INFORMED CONSENT FORM 97 

B HINT SENTENCE LISTS 99 

C SRT RAW DATA 103 

LIST OF REFERENCES 111 

BIOGRAPHICAL SKETCH 119 






LIST OF TABLES 

Table page 

3-1. Hearing Thresholds for Symmetrical Subjects and Asymmetrical Subjects 49 

3-2. Demographic Data for the Symmetrical Subjects 50 

3-3. Demographic Data for the Asymmetrical Subjects 51 

4-1. Summary of Demographic Data for Symmetrical Group and Asymmetrical Group ... 64 

4-2. Summary of Hearing Thresholds (dB HL) by Frequency for Symmetrical Subjects 

and Asymmetrical Subjects 67 

4-3. Mean PTA1000-8000 Hz (dB HL), Standard Deviation, and Degree of Asymmetry for 
Better Ear and Poorer Ear and Difference in Mean PTA1000-8000 Hz Within and 
Between Groups 69 

4-4. Mean Speech Reception Thresholds (SRT) for Symmetrical Subjects 71 

4-5. Mean Speech Reception Thresholds (SRT) for Asymmetrical Subjects 72 

4-6. Repeated Measures Analysis of Variance 77 









LIST OF FIGURES 

Figure page 

3-1. Individual Audiograms for Symmetrical Subjects 31 

3-2. Individual Audiograms for Asymmetrical Subjects 40 

3-3. Average Audiogram for Symmetrical Subject Group and Asymmetrical Subject 

Group 48 

3-4. Subject Selection 53 

3-5. The Three Experimental Conditions in the Sound Booth 56 

3-6. Diagram of Equipment Set-Up 60 

4-1. Average Audiogram for Symmetrical Subjects and Asymmetrical Subjects 68 

4-2. Mean SRT Data from the Better Ear Condition 73 

4-3. Mean SRT Data from the Midline Condition 74 

4-4. Mean SRT Data from the Poorer Ear Condition 75 

4-5. Mean SRT Data from Each Experimental Condition 76 


















Abstract of Dissertation Presented to the Graduate School 
of the University of Florida in Partial Fulfillment of the 
Requirements for the Degree of Doctor of Philosophy 

THE EFFECTS OF SYMMETRICAL AND ASYMMETRICAL SENSORINEURAL 
HEARING LOSS ON SPEECH PERCEPTION IN NOISE 

By 

Dale Anthony Ostler 

August 2000 

Chairman: Carl C. Crandell, Ph.D. 

Major Department: Communication Sciences and Disorders 

The purpose of this study was to determine the effects of symmetrical and 
asymmetrical sensorineural hearing loss (SNHL) on speech perception in noise. Subjects 
consisted of 16 listeners with symmetrical SNHL and 16 listeners with asymmetrical 
SNHL. The speech reception threshold (SRT) was assessed with Hearing In Noise 
(HINT) sentences via an adaptive testing procedure. Competing noise was a speech- 
spectrum shaped noise. Speech and noise were presented in sound field from spatially 
separated loudspeakers at various azimuths (speech at 0°/noise at 180°, speech at 
45°/noise at 225°, speech at 315°/noise at 135°). Results indicated that individuals with 
asymmetrical SNHL, defined as a 15-dB or greater difference in hearing thresholds 
between ears at two or more frequencies, obtained SRTs equivalent to those for 
individuals with symmetrical SNHL when the speech was presented to the better ear (BE) 
(speech at +/- 45°). Symmetrical and asymmetrical midline (MID) (speech and noise at 
0° and 180°, respectively) SRTs were also equivalent. Asymmetrical SRTs for the poorer 






ear (PE) were significantly poorer than the symmetrical PE SRTs. Additionally, for the 
symmetrical group, MID SRTs were significantly poorer than were BE or PE SRTs. 
Symmetrical BE and PE SRTs were not different from each other. For the asymmetrical 
group, the PE SRT was significantly poorer than the BE SRT, but was significantly better 
than the MID SRT. The BE SRT was significantly better than the MID SRT for the 
asymmetrical group. Results suggest that individuals with mild to moderately-sloping 
asymmetrical SNHL have speech-perception ability equivalent to that of individuals with 
an equal amount of symmetrical SNHL as long as the speech signal is presented to the 
BE or from directly in front of the listener. Individuals with asymmetrical SNHL, 
however, exhibited more difficulty understanding speech in noise than did individuals 
with symmetrical SNHL if the speech signal was presented toward the PE. 
Clinical/military implications are discussed as well as limitations and future directions of 
research. 



CHAPTER 1 
INTRODUCTION 

It is well recognized that there is a superiority of binaural speech perception over 
monaural speech perception, particularly in noisy or reverberant listening environments 
(e.g., Byrne, 1980; Bess and Tharpe, 1986; Bess, Tharpe, and Gibler, 1986; Bronkhorst 
and Plomp, 1988, 1989; Crandell, 1991; Gelfand and Hochberg, 1976; Harris, 1965; 
Hirsh, 1948a; Keys, 1947; Koenig, 1950; Konkle and Schwartz, 1981; Levitt and 
Rabiner, 1967a, b; Licklider, 1948; MacKeith and Coles, 1971; Markides, 1977; Moncur 
and Dirks, 1967; Moore, 1996; Nabelek and Mason, 1981; Plomp, 1986). For example, 
Bess, Tharpe, and Gibler (1986) reported a binaural advantage of approximately 47% in 
commonly reported background noise levels for 25 subjects with unilateral hearing 
impairment compared with 25 subjects with normal hearing. Harris (1965) showed that 
speech perception in noise improved by up to 34% when 24 subjects with normal hearing 
were allowed to use both ears as opposed to one ear. Moncur and Dirks (1967) also 
demonstrated that binaural speech perception was superior to monaural speech 
perception. Specifically, results indicated a 37% improvement in speech-perception 
ability for binaural over monaural listening (one ear masked) for 48 subjects with normal 
hearing in a reverberant environment (reverberation time (RT) = 0.9 seconds). 

The advantage of hearing with two ears for speech perception appears to be due to 
a number of phenomena. These phenomena include binaural summation, head shadow 
effects, localization, binaural squelch, and binaural release from masking. Binaural 
summation refers to the greater sensitivity to sound (in the absence of competing noise) 




by hearing with two ears instead of one ear (e.g., Breakey and Davis, 1949; Carhart, 
1967; Keys, 1947). Without binaural summation, a listener would experience a reduction 
of approximately 3 to 6 dB in speech-perception ability (e.g., Keys, 1947; Lochner and 
Burger, 1961; Shaw, Newman, and Hirsh, 1947). 

The head shadow effect is the reduction in signal intensity as sound moves from 
one side of the head to the other (e.g., Bess and Tharpe, 1986; Sivian and White, 1933; 
Tillman, Kasten, and Horner, 1963). The head shadow effect can result in a 3 to 6 dB 
reduction in the intensity of the signal at the ear contralateral to the origin of the 
acoustic signal (e.g., Carhart, 1965b; Olsen and Carhart, 1967; Tillman et al., 1963). For 
an individual with unilateral hearing impairment, the most favorable listening situation in 
noise (speech in the good ear and noise in the poor ear) would result in an SNR of +6.4 
dB. However, if the speech and noise sources were reversed (noise in the good ear and 
speech in the bad ear), a doubling of the head shadow effect could occur, resulting in a 
13-dB SNR reduction relative to the more favorable listening situation (Bess and Tharpe, 
1986; Carhart, 1967; Tillman et al., 1963). In 
other words, the individual with unilateral hearing impairment in a listening situation 
where noise was presented to the good ear and speech to the bad ear would be at a 13 dB 
SNR disadvantage over an individual with similar hearing impairment where speech was 
presented to the good ear and noise to the bad ear. 
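
For illustration, the arithmetic behind this worst-case comparison can be made explicit. The short Python sketch below assumes equal 65-dB presentation levels for the speech and the noise and uses the 6.4-dB head shadow value reported by Tillman et al. (1963); it is a restatement of the reasoning above, not a calculation taken from any of the cited studies.

    # Head shadow arithmetic for a listener who can use only one (good) ear.
    HEAD_SHADOW_DB = 6.4  # across-the-head attenuation (Tillman et al., 1963)

    def snr_at_good_ear(speech_db, noise_db, speech_on_good_side):
        """SNR at the good ear when speech and noise arrive from opposite
        sides of the head; whichever signal originates on the far (bad-ear)
        side is attenuated by the head shadow before reaching the good ear."""
        if speech_on_good_side:
            noise_db -= HEAD_SHADOW_DB   # the noise must cross the head
        else:
            speech_db -= HEAD_SHADOW_DB  # the speech must cross the head
        return speech_db - noise_db

    favorable = snr_at_good_ear(65.0, 65.0, speech_on_good_side=True)     # +6.4 dB
    unfavorable = snr_at_good_ear(65.0, 65.0, speech_on_good_side=False)  # -6.4 dB
    print(favorable, unfavorable, favorable - unfavorable)  # 6.4, -6.4, ~13-dB swing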

Localization deals with identifying the source of sound in auditory space in the 
horizontal and/or vertical planes (e.g., Agnew, 2000; Bergman, 1957; Bess and Tharpe, 
1986; Hirsh, 1950; Moore, 1998; Stevens and Newman, 1936). A loss of hearing 
sensitivity can impair the ear's ability to detect changes in the angular position of a sound 



source relative to the listener's head, a phenomenon known as the minimum audible 
angle (MAA) (Bess and Tharpe, 1986; Durlach, Thompson, and Colburn, 1981; Humes, 
Allen, and Bess, 1983; Nordlund, 1964; Viehweg and Campbell, 1960). In particular, 
many studies have determined that individuals with unilateral and/or asymmetrical 
hearing loss have significantly larger MAAs than individuals with normal hearing (e.g., 
Bess and Tharpe, 1986; Durlach et al., 1981; Hirsh, 1950; Humes et al., 1983; Nordlund, 
1964). Additionally, studies have suggested that poor localization ability in noise is 
associated with poor speech-perception ability in noise (e.g., Bess et al., 1986; Hirsh, 
1948b; Hirsh, 1950; Licklider, 1948). 

Binaural squelch, or "The Cocktail Party Effect," refers to the ability of the 
auditory system to suppress the deleterious effects of noise and/or reverberation, thus 
enhancing the perception of the target signal (e.g., Cherry, 1953; Koenig, 1950; Licklider, 
1948). Previous studies have shown that a unilateral hearing loss can result in a 30% to 
36% loss of speech-perception ability in noise (e.g., Carhart, 1965a; Gelfand and 
Hochberg, 1976; Harris, 1965; Licklider, 1948; MacKeith and Coles, 1971; Moncur and 
Dirks, 1967; Olsen and Carhart 1967). 

Binaural release from masking refers to improvement in the ability to detect target 
signals in a background of noise by use of interaural amplitude and phase differences 
(e.g., Agnew, 2000; Bronkhorst and Plomp, 1988, 1989; Byrne, 1980; Licklider, 1948; 
Moore, 1996, 1998). While this phenomenon is primarily a laboratory event, a loss of 
these interaural cues from hearing impairment can result in a 4- to 7-dB decrease in 
speech-perception ability (Carhart et al., 1967; Kaiser and David, 1960; Levitt and 
Rabiner, 1967a, b). 



Despite the plethora of investigations that have examined speech perception in 
binaural and monaural listening conditions, there remain limited data concerning the 
effects of asymmetrical sensorineural hearing loss (SNHL) on binaural speech-perception 
ability. This lack of information is unfortunate as past investigators have suggested that 
individuals with asymmetrical SNHL may lose some, or all, of the binaural advantage 
(Byrne, 1980; Bess and Tharpe, 1986; Keys, 1947; Nabelek and Mason, 1981; Pollack, 
1948; Stream and Dirks, 1974). For example, Keys (1947) found that the binaural 
advantage appears to be reduced for individuals with unequal amounts of hearing loss in 
the two ears. Specifically, results indicated that when individuals with hearing 
impairment received a speech signal of unequal loudness between the two ears they had 
speech-perception scores no better than when the speech signal was presented to one ear 
alone. It is reasonable to suggest that speech perception in noise will be compromised if 
the binaural advantage is reduced or eliminated in the case of asymmetrical hearing 
impairment. To support this assumption, Nabelek and Mason (1981) demonstrated that 
subjects with asymmetrical SNHL exhibited greater susceptibility to reverberation and 
noise than did subjects with symmetrical SNHL. Specifically, four subjects with 
asymmetrical hearing impairment demonstrated almost 10% poorer speech-perception 
performance under varying conditions of reverberation and noise than did 11 subjects 
with symmetrical hearing impairment. Clearly, the relevance of determining the effects 
of asymmetrical hearing impairment on speech perception is paramount for individuals in 
industry and the military where detecting and understanding acoustical warning signals 
and voice commands may mean the difference between life and death or serious injury. 
Additionally, there are clinical implications in counseling, aural rehabilitation, and fitting 



of hearing aids or other assistive listening devices for those individuals who suffer from 
asymmetrical hearing impairment. 

With these considerations in mind, the purpose of this study was to examine the 
effects of asymmetrical hearing impairment on speech-perception ability in noise. 
Specifically, 32 adult listeners, 16 with symmetrical SNHL and 16 with asymmetrical 
SNHL served as subjects. An adaptive speech-perception procedure was used to assess a 
speech reception threshold (SRT) in noise. Sentences from the Hearing in Noise Test 
(HINT) (Nilsson, Soli, and Sullivan, 1994) were used as the speech stimuli. Speech- 
spectrum shaped noise was used as the competing stimulus. Three speaker/listener 
orientations were used for all subjects. These conditions were: (1) speech at 0° and noise 
at 180°, (2) speech at 45° and noise at 225°, and (3) speech at 315° and noise at 135°. 
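
As an illustration of how an adaptive SRT-in-noise measure of this general kind can be tracked, the sketch below simulates a simple one-up/one-down sentence-level staircase. The step sizes, trial count, starting SNR, and the simulated listener are hypothetical placeholders chosen for the example; they are not taken from the HINT protocol or from the procedures described in Chapter 3.

    import random

    def adaptive_srt(score_sentence, start_snr_db=0.0, step_db=4.0,
                     small_step_db=2.0, n_sentences=20):
        """Track a speech reception threshold (SRT) in noise adaptively.
        score_sentence(snr_db) -> bool returns True when the sentence is
        repeated correctly at that SNR.  The SNR moves down after a correct
        response and up after an error (one-up/one-down), converging on the
        50%-correct point; step sizes here are illustrative only."""
        snr, last_correct, reversals, track = start_snr_db, None, [], []
        for _ in range(n_sentences):
            correct = score_sentence(snr)
            track.append(snr)
            if last_correct is not None and correct != last_correct:
                reversals.append(snr)                  # track direction changed
            step = step_db if len(reversals) < 2 else small_step_db
            snr += -step if correct else step
            last_correct = correct
        return sum(track[4:]) / len(track[4:])         # mean SNR of later trials

    # Simulated listener whose true SRT is -2 dB SNR (logistic psychometric function).
    def simulated_listener(snr_db, true_srt=-2.0, slope=0.2):
        p_correct = 1.0 / (1.0 + 10 ** (-(snr_db - true_srt) * slope))
        return random.random() < p_correct

    print(round(adaptive_srt(simulated_listener), 1))  # rough estimate near -2 dB SNR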









CHAPTER 2 
REVIEW OF LITERATURE 

Binaural Hearing 

In order to more fully understand the effects of symmetrical and asymmetrical 

hearing impairment on speech-perception ability, a review of binaural hearing is 

provided. The advantages of binaural hearing can be separated into five main areas: 

(1) binaural summation, (2) head shadow effect, (3) localization, (4) binaural squelch, 

and (5) binaural release from masking. Each of these areas is discussed below. 

Binaural Summation 

Binaural summation is the phenomenon of exhibiting greater sensitivity to sound 

when using two ears, as opposed to one (Bocca, 1955; Breakey and Davis, 1949; Byrne, 

1980; DeCroix and Dehaussey, 1964; Dermody and Byrne, 1975; Hirsh, 1948b, 1950a; 

Keys, 1947; Licklider, 1948; Lochner and Burger, 1961; Markides, 1977; Nabelek and 

Pickett, 1974b; Pollack, 1948; Pollack and Pickett, 1958; Reynolds and Stevens, 1960; 

Shaw, Newman, and Hirsh, 1947). Specifically, binaural summation causes the threshold 

for sound measured binaurally in quiet to be lower (better) by 3 to 6 dB than the 

threshold measured for either ear alone. 

Keys (1947) was among the first to demonstrate the impact of binaural summation 

on speech-perception. Using a group of 47 subjects with hearing impairment, he 

presented sentential stimuli both monaurally and binaurally under earphones in quiet 

using two different methods. In Method I, the stimuli were presented to the two ears at 

the same dB HL (Hearing Level) in a Y-cord arrangement as if a single hearing aid were 



serving both ears. In Method II, the stimuli were presented to the two ears at equal dB 
sensation level (SL) as if each ear had its own hearing aid adjusted to the hearing 
sensitivity of its respective ear. Keys found that the average binaural threshold in 
Method II (ears equated for hearing sensitivity) was significantly better (i.e., lower) than 
the average monaural threshold by 4.1 dB. That is, when the speech level 
was equated for hearing ability at both ears, the binaural threshold showed a statistically 
significant improvement over the monaural threshold. However, in Method I (ears not 
equated for hearing sensitivity), even though the binaural threshold was better by 1.5 dB, there was no 
statistical difference from the monaural threshold. 

Shaw, Newman and Hirsh (1947) also studied binaural summation. These authors 
presented spondaic words to five subjects with normal hearing in quiet via earphones in 
both monaural and binaural presentations. The presentation levels in the binaural 
condition were equal for the two ears. That is, the intensity of the stimulus presentation 
was set at equal SLs between the ears. The authors found that the binaural condition had 
an average threshold 3.6 dB lower than in the monaural condition. These authors 
concluded that when both ears are equated there is a binaural summation effect that yields 
better thresholds than either ear could obtain independently. 

Breakey and Davis (1949) tested two groups of 10 subjects in order to study 
binaural summation. Group I was composed of subjects with normal hearing and Group 
II was composed of subjects with symmetrical hearing impairment (four had conductive 
hearing loss, four had sensorineural hearing loss, and two had mixed hearing loss). All 
testing was conducted in quiet under earphones in monaural and binaural conditions. 
Stimuli consisted of both spondaic words and sentences presented at equal SL to both 



ears during the binaural presentation. Results showed a statistically significant binaural 
improvement in speech threshold (2.6 dB for Group I and 3.3 dB for Group II). These 
findings support the concept of binaural summation particularly when the two ears are 
equated for sensitivity. As well, these findings suggest that individuals with symmetrical 
hearing impairment also obtain binaural summation similar to that obtained by 
individuals with normal hearing. 

Lochner and Burger (1961) evaluated binaural summation on speech signals by 
measuring the SLs needed to achieve equal speech-perception scores for monaural and 
binaural presentations. Eight subjects (hearing status not reported) were presented with 
monosyllabic words under earphones in quiet. The stimuli were presented at a range of 
SLs from about 5 dB to 50 dB relative to the binaural threshold. The monaural speech- 
discrimination score reported was the average of the two monaural speech-discrimination 
scores for each subject. These researchers determined that in order to obtain equal 
speech-discrimination in both the monaural and the binaural conditions, the SL of the 
signal in the monaural presentation had to be raised by 3 dB. 

In an effort to determine binaural summation, Carhart (1967) measured the 
speech-perception of a group of 32 normal hearing subjects. Presentations of 
monosyllabic words were made monaurally and binaurally at a wide range of SLs (re: the 
monaural SRT). Method of delivery (sound-field or earphone) was not specified. 
Results showed that when the speech-perception scores were equal between the binaural 
and monaural modes, the binaural presentation level was consistently 2.7 dB lower than 
the monaural presentation level. In other words, to obtain the same speech-perception 
score monaurally as binaurally, the intensity of the signal needed to be raised 2.7 dB. 



These research findings have consistently shown that when one uses both ears, as 
opposed to one ear, the threshold for hearing speech improves by as much as 3 to 4 dB. 
Moreover, this phenomenon was consistent for both listeners with symmetrical hearing 
impairment and listeners with normal hearing. A 3- to 4-dB gain in hearing threshold 
may not seem like much of an advantage for a person with normal hearing who receives 
speech at suprathreshold levels. However, a 3-dB gain can be the difference between 
understanding and not understanding spoken communication for someone with a hearing 
loss who often hears speech at or near threshold levels (Bess and Tharpe, 1986; Carhart, 
1967; Markides, 1977; Plomp, 1986). This point is further illustrated by noting that the 
slope of the linear portion of the performance intensity function (PIF) for monosyllabic 
words is 6% per dB (Bess and Tharpe, 1986; Carhart, 1967; Konkle and Schwartz, 1981; 
Markides, 1977; Studebaker, 1991). Therefore, a 3-dB advantage for binaural hearing 
results in at least an 18% binaural speech-perception improvement over the monaural 
condition. Furthermore, because the PIF for sentential material is steeper than it is for 
words, the resulting improvement in binaural speech perception over monaural speech 
perception can be substantially greater (Bess and Tharpe, 1986; Konkle and Schwartz, 
1981). 
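
A worked example of the conversion used in the preceding paragraph may be helpful: multiplying a decibel advantage by the PIF slope gives the expected change in percent-correct score. The brief sketch below is a simple restatement of that multiplication, using the 6%-per-dB monosyllabic-word slope cited above; the function name is illustrative only.

    def db_advantage_to_percent(advantage_db, pif_slope_pct_per_db=6.0):
        """Convert a dB listening advantage into an expected change in
        speech-perception score via the linear-portion slope of the
        performance intensity function (PIF)."""
        return advantage_db * pif_slope_pct_per_db

    print(db_advantage_to_percent(3.0))  # 18.0 -> the 18% figure cited above
    print(db_advantage_to_percent(6.4))  # ~38% for a 6.4-dB head shadow, for comparison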

In addition to binaural summation at threshold, several researchers demonstrated 
an even greater binaural advantage at suprathreshold levels (Byrne, 1980; DeCroix and 
Dehaussey, 1964; Dermody and Byrne, 1975; Fletcher and Munson, 1933; Hirsh, 1948b, 
1950a; Hirsh and Pollack, 1948; Licklider, 1948; Pollack and Pickett, 1958; Reynolds and 
Stevens, 1960; Scharf, 1968). Researchers consistently determined that the 
suprathreshold binaural summation of loudness is on the order of 6 to 10 dB, meaning 




that at suprathreshold levels, a stimulus of the same intensity is perceived to be as much as 6 to 
10 dB louder when using two ears than when using one ear. 

For example, Hirsh and Pollack (1948) performed a loudness-matching exercise 
on eight subjects with normal hearing. In quiet and under earphones, the subjects were 
presented monaurally with a 250-Hz tone at 80 dB SPL and they were required to match 
its loudness with the same tone presented binaurally. The results showed that the 
binaural match was consistently set at 74 dB SPL (a level 6 dB lower, or less 
intense, than the monaural presentation). In other words, a suprathreshold tone presented 
monaurally was judged to be 6 dB louder if it were presented binaurally. 

Reynolds and Stevens (1960), in a similar monaural-binaural loudness-matching 
procedure, required their subjects to adjust a binaural low frequency noise (100-500 Hz) 
to be equal in loudness to the same noise presented monaurally and then to adjust a 
monaural noise to be equal in loudness to the same noise presented binaurally. All 
stimuli were presented at several SPLs. Results showed a binaural summation effect that 
increased with intensity level. For example, a tone presented to one ear at about 57 dB 
SPL would be matched binaurally at about 50 dB SPL: a 7-dB difference. However, the 
same tone presented monaurally at 90 dB SPL would be matched binaurally at 80 dB 
SPL: a 10-dB difference. 

In summary, researchers have established a binaural summation advantage at 
threshold and at suprathreshold levels. The advantage at threshold is 3 to 6 dB while the 
advantage at suprathreshold is up to 10 dB. 
Head Shadow Effect 

The acoustical shadow cast by the head renders a signal in the sound field less 
intense at the ear on the opposite side of the head and is known as 




the head shadow effect (Sivian and White, 1933). The head shadow effect for speech 
stimuli is often measured in sound field by comparing the speech-perception score 
between near-ear (NE) and far-ear (FE) presentations (Carhart, 1965a; Markides, 1977; 
Nabelek and Pickett, 1974a,b; Olsen and Carhart, 1967; Tillman et al., 1963). Near-ear 
measurements (also called monaural-direct) are obtained with the primary signal 
presented on the side of the head with the better ear. Far-ear measurements (also called 
monaural-indirect) are obtained with the primary signal presented on the side of the head 
with the poorer ear. 

Tillman et al. (1963) used 24 subjects with normal hearing situated equidistant 
between two loudspeakers located at angles of 45° azimuth. Each subject had one ear 
occluded by an earmuff. Each subject's SRT was then measured in quiet by presenting 
spondee words from each speaker independently. This resulted in monaural-direct and 
monaural-indirect presentations. Results showed that the NE SRT was 6.4 dB better 
(lower) than was the FE SRT. That is, the signal for spondaic words suffered a 6.4-dB 
loss in intensity as it traveled from one side of the head to the other. 

Nordlund and Fritzell (1963) also investigated the head shadow effect on 10 
subjects with normal hearing. Near-ear and far-ear conditions were simulated by 
recording the stimuli (monosyllabic words) from various azimuths via the two ears on an 
artificial head that was rotated to obtain the angles of 0°, 30°, 60°, 90°, 135°, and 180°. 
Speech-perception scores were then obtained by playing these recordings to the subjects 
under earphones. All scores were obtained in the absence of any competing noise. The 
authors reported higher speech-perception scores for the near-ear recordings of 17%, 
33%, 30%, and 23% for the azimuths 30°, 60°, 90°, and 135° respectively. This yielded 




an average 25.8% head shadow advantage across orientation for NE listening. Stated 
otherwise, the head shadow effect created an approximately 26% deficit in speech- 
perception when the good ear was positioned unfavorably away from the speaker. 

Olsen (1965) confirmed the head shadow effect on 18 subjects with normal 
hearing. Listeners were tested in sound field with the speakers arranged at azimuths of 
45°. A hearing loss was induced in one ear via an ear muff and noise masking. 
Monosyllabic words were presented in a monaural-direct and a monaural-indirect 
paradigm. Results showed a 5- to 7-dB head shadow effect. Using a performance 
intensity function slope of 6% per dB, these results suggest an approximately 30 to 42% 
decrease in speech-perception due to the head shadow effect. 

Olsen and Carhart (1967) studied head shadow effect by measuring sound field 
speech-perception scores on a group of 24 subjects with normal hearing. Perception 
scores were obtained in the monaural-direct and monaural-indirect conditions. The 
monaural-indirect condition was created by introducing white noise to the ear through an 
insert receiver and by covering the ear with an ear muff. Phonetically balanced (PB) 
monosyllabic words were used as the test items. Competing noise was produced by a 
second talker. The PB words and the competing message were delivered simultaneously, 
each from one of two loudspeakers arranged equidistant in front of and at opposite 45° angles 
from the midline of the subject's head. The competing message was set at a level 6 dB 
below that of the PB materials. The authors reported a 34% reduction in speech- 
perception ability in the monaural-indirect condition. 

Additionally, it has been demonstrated that the effects of head shadow can create 
as much as a 13-dB SNR deficit for the individual with a unilateral hearing impairment 




(Carhart, 1965a; Olsen and Carhart, 1967; Tillman et al., 1963). Specifically, Tillman et 
al. (1963) demonstrated that for the individual with unilateral hearing loss, if speech were 
presented to the good ear and noise were presented to the impaired ear (at equivalent 
intensities) a +6.4 dB SNR would be created at the good ear (in which case, speech 
perception would likely be minimally affected). However, if the situation were reversed, 
with noise presented to the good ear and speech to the impaired ear, a -6.4 dB SNR 
would be created at the good ear. This amounts to an overall 13-dB drop in SNR for the 
individual in a noisy listening environment. 

In summary, head shadow is the effect of reducing the intensity of an acoustic 
signal as it travels from one side of the head to the other. Binaural hearing allows the 
individual to minimize the deleterious effects of head shadow because one of the two ears 
can generally be positioned to the side of the speech signal while blocking the noise with 
the head (Bess and Tharpe, 1986; Konkle and Schwartz, 1981). This advantage is up to 6 
dB in quiet and up to 13 dB in noise. 

Localization 

Localization is the ability to identify the direction of a sound source in auditory 
space (Agnew, 2000; Bergman, 1957; Bess and Tharpe, 1986). The primary 
characteristics of an auditory signal that contribute to the ability to localize sounds are (1) 
the interaural time difference (ITD), (2) the interaural intensity difference (IID), and (3) 
spectral cues provided by the pinna (Bess and Tharpe, 1986; Halverson, 1927; Sivian and White, 
1933; Stevens and Newman, 1936; Yost and Nielsen, 1985). Interaural time difference 
refers to the difference in arrival time of a signal at the two ears. Interaural intensity 
difference refers to the difference in stimulus intensity of a signal at the two ears. The 
pinna aids in localization by funneling the spectral cues for the signal (mainly the high 




frequency components) to the external auditory meatus and into the ear canal. Interaural 
time difference and IID are caused by the distance between the ears and the acoustical 
shadow of the head, respectively (Yost and Nielsen, 1985). For example, an auditory 
signal originating from the right side of the head will arrive earlier and be more intense at 
the right ear than at the left ear. These ITDs and IIDs are detected by the two 
ears and aid in localization. 

These interaural characteristics are frequency dependent for pure-tone signals 
(Halverson, 1927; Mills, 1960; Sandel, Teas, Feddersen, and Jeffress, 1955; Sivian and 
White, 1933; Stevens and Newman, 1936). Specifically, localization of low frequency 
signals (below 1500 Hz) is primarily determined by phase (ITD) characteristics while 
localization of high frequency signals (above 1500 Hz) is dominated by intensity (IID) 
characteristics. However, for localization of speech signals both ITD and IID are needed 
(Henning, 1974; Mills, 1960; Sandel et al., 1955). 

These interaural characteristics and their effect on localization can be explained 
readily by a brief review of the physics of sound. High-frequency signals have shorter 
wavelengths than do low-frequency signals, and high-frequency signals are attenuated by 
objects in their path of travel (such as the head) whose size is equal to or greater than the 
wavelength of the sound. Therefore, signals above about 1500 Hz will have wavelengths 
shorter than the diameter of the head and will have their intensity reduced as the signal 
travels from one side of the head to the other. Conversely, the intensity of low-frequency 
signals (below about 1500 Hz) is relatively unaffected by the head shadow because their 
wavelengths are longer than the size of the head. However, because of the physical 
distance separating the two ears, the longer wavelength will arrive at the two ears at 






different phases. The difference in phase of the signal at the two ears is detected by the 
neural processing in the cochlea (Pickles, 1988). In brief, low characteristic-frequency 
neurons, innervated via the apical end of the cochlea, respond best to temporal 
differences (i.e. phase differences) in the sound signal. 
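
The approximate 1500-Hz crossover described above follows from the relation between wavelength, the speed of sound, and the width of the head. The short calculation below is a sketch that assumes a 343-m/s speed of sound and a nominal 0.23-m acoustic path between the ears (assumed values, not measurements from this study); it illustrates why phase (ITD) cues dominate below that frequency and intensity (IID) cues above it.

    SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air (assumed)
    HEAD_PATH_M = 0.23          # nominal ear-to-ear acoustic path (assumed)

    def wavelength_m(frequency_hz):
        """Wavelength of a pure tone in air."""
        return SPEED_OF_SOUND_M_S / frequency_hz

    for f in (250, 500, 1500, 3000, 6000):
        wl = wavelength_m(f)
        cue = "phase (ITD)" if wl > HEAD_PATH_M else "intensity (IID)"
        print(f"{f:>5} Hz: wavelength = {wl:.3f} m, dominant cue: {cue}")
    # Near 1500 Hz the wavelength (~0.23 m) matches the head dimension, which
    # is why the dominant localization cue changes around that frequency.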

Stevens and Newman (1936) measured the effect of phase and intensity 
differences on the localization ability of two subjects (hearing acuity not reported) in a 
free-field. Subjects were placed in a chair on a high platform on a rooftop to avoid 
reflecting surfaces. The stimuli (pure-tones from 200 to 10,000 Hz) were presented from 
a loudspeaker at various azimuths from 0° to 180°. The loudspeaker was mounted on a 
rotating boom 12 feet in front of and on a plane horizontal to the subject's head. 
Localization ability was measured by noting the subject's localization error rate by 
frequency. Localization error rate was simply the percentage of times that the location of 
a stimulus was identified incorrectly. Results showed that the error of localization was 
low for signals below 2000 Hz and above 4000 Hz and that the percent of reversals was 
high for signals below 2000 Hz and low for signals above 4000 Hz. These results were 
compared against the theoretical change in localization that would be expected from a 
phase-based computation of localization (Halverson, 1927). This phase-based 
computation suggested that localization of low-frequency sounds was phase-dominated. 
Therefore, errors of localization of low-frequency sounds would be low. The authors 
concluded that these results are consistent with earlier work (Halverson, 1927) and 
confirmed that phase differences were primarily responsible for localization below 2000 
Hz, while intensity differences were primarily responsible for localization above 4000 Hz 




and that an interplay between phase and intensity were jointly responsible for localization 
between 2000 and 4000 Hz. 

Sandel et al. (1955) determined the acoustical characteristic responsible for 
localization ability by comparing actual sound field localization against a mathematically 
derived prediction of localization for pure-tone signals of various frequencies and phase 
relationships. The predicted localization was a mathematical computation that accounted 
for interaural time differences and that calculated the phase angle. Specifically, the 
formula predicted that a low-frequency signal (1500 Hz and below), emitted in-phase 
from two loudspeakers located at 0° and 40° azimuth would be heard as if coming from in 
between the actual sources (20° azimuth) while a high-frequency signal, above 1500 Hz, 
emitted in-phase from the same loudspeakers would be heard as if coming from midline 
(0° azimuth). Conversely, if the paired signals were emitted 180° out-of-phase, the low- 
frequency signal would be heard as if coming from the opposite side of the head (320° 
azimuth) while the high-frequency signal would be heard as if coming from midline (0° 
azimuth). This formula predicted that ITD was responsible for localization. To test their 
hypothesis, these authors placed loudspeakers at 40°, 0°, and 320° azimuth in front of the 
subject in an anechoic chamber. Subjects consisted of four individuals with normal 
hearing and one individual with a slight hearing loss in the left ear at 3000 Hz. Pure- 
tones were emitted in pairs from speakers arranged at 0° to 40° and 320° to 0° azimuths. 
In the first trial, the tones were presented in-phase while in the second trial the tones were 
presented antiphasic (180° out-of-phase). The localization ability of the five subjects in 
repeated trials showed a high degree of consistency between the predicted and actual 
localization. These results confirmed the authors' hypothesis that localization ability for 




low frequencies (below 1500 Hz) is determined by ITDs between the two ears while 
localization ability of the higher frequencies (above 1500 Hz) is not similarly affected. 
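
Sandel et al.'s prediction formula is not reproduced here; as a stand-in illustration of how an interaural time difference, and hence an interaural phase difference, can be predicted from source azimuth, the sketch below uses the common spherical-head (Woodworth) approximation, ITD = (r/c)(theta + sin theta). The head radius, azimuth, and frequencies are assumed values chosen only to show why the phase cue is usable at low frequencies and becomes ambiguous at high frequencies.

    import math

    SPEED_OF_SOUND_M_S = 343.0
    HEAD_RADIUS_M = 0.0875  # nominal head radius (assumed)

    def itd_seconds(azimuth_deg):
        """Interaural time difference from the spherical-head (Woodworth)
        approximation; a stand-in, not the formula used by Sandel et al."""
        theta = math.radians(azimuth_deg)
        return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (theta + math.sin(theta))

    def interaural_phase_deg(azimuth_deg, frequency_hz):
        """Interaural phase difference of a pure tone for that ITD."""
        return (itd_seconds(azimuth_deg) * frequency_hz * 360.0) % 360.0

    print(f"predicted ITD at 40 deg azimuth: {itd_seconds(40.0) * 1e6:.0f} microseconds")
    for f in (500, 1500, 3000):
        print(f"{f} Hz: interaural phase difference = {interaural_phase_deg(40.0, f):.0f} deg")
    # At low frequencies the phase difference stays well within one cycle and is a
    # usable cue; at higher frequencies it wraps around and becomes ambiguous.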

Bess et al. (1986) demonstrated that individuals with unilateral hearing loss are at 
a significant disadvantage in localizing sound as compared to individuals with normal 
hearing. These authors performed localization studies on 20 children with varying 
degrees of unilateral SNHL and on 20 children with normal hearing. Subjects were 
seated in an anechoic chamber facing thirteen loudspeakers arrayed from 90° to 270° in 
15° intervals. The subjects were asked to identify which speaker was emitting a 500- or a 
3000-Hz tone. Localization performance was assessed by measuring the number of 
errors made in localizing the sound source. An error index of 0.0 indicated perfect 
localization while an error index of 1.0 showed complete inability to localize. Results 
showed a significant localization advantage for the normal hearing group (error index of 
0.05 at 500 Hz and 0.17 at 3000 Hz) over the unilaterally hearing impaired group (error 
index of 0.69 at 500 Hz and 0.78 at 3000 Hz). Additionally, both the normal hearing 
group and unilaterally hearing impaired group showed greater difficulty localizing the 
3000-Hz tone than the 500-Hz tone. Furthermore, for the hearing-impaired group, the 
authors determined that the greater the degree of loss, the greater the difficulty was in 
localizing and that poor speech-perception ability in noise was associated with poor 
localization ability in noise. 
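
Assuming the error index is simply the proportion of loudspeaker identifications that were incorrect (the exact computation is not given here), a minimal sketch of such an index, with made-up response data, is shown below.

    def localization_error_index(responses, targets):
        """Proportion of trials on which the reported loudspeaker did not
        match the loudspeaker actually emitting the tone (0.0 = perfect
        localization, 1.0 = no correct identifications)."""
        errors = sum(1 for r, t in zip(responses, targets) if r != t)
        return errors / len(targets)

    # Hypothetical example: loudspeaker azimuths in degrees (90-270 deg, 15-deg steps).
    targets = [90, 150, 210, 270, 180]
    responses = [90, 165, 210, 255, 180]
    print(localization_error_index(responses, targets))  # 0.4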

It is clearly demonstrated in the foregoing articles that one of the advantages of 
binaural hearing is the ability to localize sounds and that the greater the hearing 
impairment, the greater will be the difficulty in localizing. The characteristics 




responsible for the ability to localize are the ITD and IID imposed by the head on the 
acoustical signal. 
Binaural Squelch 

Binaural squelch is the ability to filter out distracting interference from noise 
and/or reverberation and to focus on the desired signal (Cherry, 1953; Koenig, 1950). The 
greatest degree of binaural squelch appears to occur when an individual has equal hearing 
in both ears. Squelch has been referred to as "The Cocktail Party Effect" (Bess and 
Tharpe, 1986; Carhart, 1965a; Cherry, 1953; Kuyper, 1972; Libby, 1980) and is related to 
the phenomenon of binaural release from masking (Agnew, 2000; Bess and Tharpe, 
1986; Konkle and Schwartz, 1981) (discussed later). Binaural squelch traditionally is 
measured by comparing binaural hearing with monaural hearing in noise and/or 
reverberation (Bess and Tharpe, 1986; Hawkins, 1986; MacKeith and Coles, 1971; 
Markides, 1977). Researchers have identified the advantage of binaural squelch as a 3- to 
6-dB improvement in the SNR for speech (Carhart, 1965a, b; Gelfand and Hochberg, 
1976; Harris, 1965; Hirsh, 1948a; Licklider, 1948; MacKeith and Coles, 1971; Moncur 
and Dirks, 1967; Nordlund and Fritzell, 1963; Olsen and Carhart, 1967; Zurek, 1993). 

Licklider (1948) found a 2.5-dB binaural advantage for PB words in a background 
of white noise. Subjects consisted of four individuals (hearing status was not reported). 
Stimuli were presented under earphones 180° out-of-phase (antiphasic) and in-phase in 
the binaural condition. For the monaural condition, speech was presented to one ear 
while the noise was presented in-phase to both ears. Results showed, on the average, that 
binaural hearing allowed the same 50% speech-perception score to be obtained at an SNR 
2.5 dB poorer than that required for monaural hearing. That is, binaural hearing "squelched" the 
noise by 2.5 dB. 




Harris (1965) studied two groups of subjects (89 with normal hearing and 36 with 
hearing impairment) in an effort to determine the contribution of the second ear in a 
background of competing noise. Monotic and dichotic listening conditions were 
simulated via earphones by recording sentences (PAL, Psychoacoustic Laboratory, Test 
#8) from a loudspeaker positioned 12 feet directly in front of two microphones that were 
set 12 inches apart. For competing noise, he separately recorded a male and female talker 
each from one of two other loudspeakers set at 45° on either side of the microphones and 
in the same azimuth as the first loudspeaker. These recordings were then played back to 
the subjects under earphones creating simulated monaural and binaural conditions. 
Harris noted that his recordings afforded the subjects only temporal cue differences 
(phase relations) and did not address the intensity differences that would have been 
present if the recordings had been made with an artificial head interposed between the 
microphones (i.e. the head shadow effect). His results showed a statistically significant 
improvement in binaural over monaural speech-perception of 15 to 25% for the normal- 
hearing group and 25 to 33% for the hearing-impaired group. Harris also evaluated the 
PIF of the speech material and found a rise in speech-perception of 6% per dB 
improvement in SNR. Based on these data, he concluded that a 2.5- to 5.5-dB SNR 
advantage occurred due to the ears' ability to squelch noise and that individuals with 
hearing impairment possess this binaural advantage as do individuals with normal 
hearing. 

Olsen and Carhart (1967) used 12 individuals with normal hearing to study the 
squelch effect. In sound field the subjects listened to CNC (consonant-nucleus- 
consonant) words in the presence of competing sentences at SNRs of -18, -12, -6, and 0 




dB. The stimuli and noise were delivered separately through two loudspeakers located at 
45° azimuths on either side of the subject. The subjects were tested at each SNR under 
the following conditions: (1) monaural-indirect, (2) monaural-direct, and (3) binaural. 
Monaural conditions were created by introducing white noise via an insert receiver under 
an ear muff. The monaural-direct condition had an average of 43% less squelch effect 
than the monaural-indirect condition did at the two poorest SNRs. In addition, the 
binaural condition had superior results to the monaural-direct condition by an average of 
13% at the two poorest SNRs. Next, the authors calculated the slope of the PIF for 
speech discrimination and found it to be 3.1% per dB on the linear portion of the curve. 
Therefore, by dividing the binaural advantage over monaural-direct (13%) by the slope 
(3.1% per dB) they determined a binaural squelch advantage of 4 dB. 
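
The division described in the preceding sentence can be expressed as a one-line helper; the sketch below simply restates the calculation reported by Olsen and Carhart (1967) and is not code from that study.

    def percent_advantage_to_db(advantage_pct, pif_slope_pct_per_db):
        """Convert a percentage-point speech-perception advantage into an
        equivalent SNR advantage by dividing by the PIF slope."""
        return advantage_pct / pif_slope_pct_per_db

    print(round(percent_advantage_to_db(13.0, 3.1), 1))  # ~4.2 dB, as in Olsen and Carhart (1967)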

MacKeith and Coles (1971) measured the squelch effect on eight subjects with 
normal hearing by presenting PB words and competing noise (from different speakers) in 
sound field at eight different azimuths equally spaced from 0° to 360°. Experimental 
conditions were (1) binaural, (2) near-ear, and (3) far-ear. The monaural conditions were 
created by placing an earmuff and earplug on one ear of the subject or by inserting 
uncorrelated masking noise (speech-spectrum noise) in the earmuff. The researchers 
measured the SNR needed to obtain a 50% speech-perception score for all three 
conditions and found a 3.7-dB average binaural improvement over the monaural near-ear 
condition. 

Gelfand and Hochberg (1976) also reported that binaural squelch is responsible 
for the improvement seen in speech-perception in noise and/or reverberation when using 
two ears instead of one. These authors studied two groups of 30 subjects each: (1) one 




group with normal hearing, and (2) a group with SNHL. Speech-perception scores were 
obtained on all subjects (monaurally and binaurally) using the Modified Rhyme Test 
(MRT) under earphones with reverberation times (RT) of 0, 1, 2, and 3 seconds. For the 
group with hearing impairment the monaural stimuli were given at the ear's PB max (the 
presentation level that yields the highest performance for PB words) while the binaural 
presentation was set at the subject's PB max for the better ear. For the group with normal 
hearing the stimuli were presented at 30 dB SL (re: SRT). Results showed statistically 
significant binaural advantages of 13.2% and 9.4% for the subjects with normal hearing 
and subjects with SNHL respectively. Using the PIF of 6% per dB SNR, these results 
show up to a 2.2-dB SNR binaural advantage. It should be noted, however, that Gelfand 
and Hochberg stated that their subjects were deprived of interaural cues in the binaural 
presentation. Thus, their subjects did not receive a true binaural presentation but rather a 
diotic presentation of the speech stimuli. This limitation could account for the lower 
squelch advantage in this investigation than is observed in other investigations. 

It is clear from these investigations that binaural squelch offers a 3-6 dB 
advantage over monaural listening. The small discrepancies in this range of estimates (3- 
6 dB) could readily be accounted for by differences among the various studies, such as 
sample size, stimuli, competing noise, and procedures. 
Binaural Release From Masking 

Binaural release from masking is the improvement in the threshold of a masked 
signal when interaural characteristics of the masker and/or the signal are changed 
(Agnew, 2000; Bess and Tharpe, 1986; Byrne, 1980; Konkle and Schwartz, 1981; Moore, 
1997). This phenomenon has also been referred to as masking level difference (MLD) 
(Byrne, 1980; Moore, 1997) and unmasking (Bronkhorst and Plomp, 1988, 1989). It 




occurs because the binaural auditory system is able to identify the interaural phase and 
intensity differences of the signal and noise at the two ears (Agnew, 2000; Konkle and 
Schwartz, 1981). For example, a signal that is just barely masked by noise in one ear 
becomes audible when the noise is concurrently introduced to the opposite ear (Konkle 
and Schwartz, 1981). Hence, the terms, "binaural release from masking," "masking level 
difference," and "unmasking." Researchers have shown that binaural release from 
masking results in a 4- to 7-dB improvement in threshold for speech (Carhart, Tillman, 
and Johnson, 1967; Kaiser and David, 1960; Levitt and Rabiner, 1967a; Licklider, 1948; 
Schubert and Schultz, 1962). 

Schubert and Schultz (1962) measured a 33% improvement in speech-perception for 
antiphasic over homophasic presentation in 15 subjects (hearing status not 
reported). Subjects listened under earphones to running speech (single words) against a 
broad-band random noise. Speech signals were presented under two conditions: (1) in- 
phase and (2) 180° out-of-phase. Noise was presented in-phase. Results showed a 33% 
improvement in speech-perception in the antiphasic condition. That is, when the 
interaural phase of the signal was changed from homophasic to antiphasic, a 33% 
increase in speech-perception was achieved. Using a 6% per dB PIF, the authors 
concluded that the 33% binaural release from masking accounted for approximately a 
5-dB binaural improvement in the SNR. 

Levitt and Rabiner (1967a) studied binaural release from masking by measuring 
speech-perception on four subjects (hearing status not reported). Subjects listened to 
binaural presentations under earphones of speech (CNC words) in a background of broad- 
band noise. The experimental conditions compared binaural homophasic presentations of 




the signal and the noise with binaural presentations of the signal (in noise) that were 
antiphasic at various spectral or frequency band regions (broad-band, below 1000, 500, 
and 250 Hz). Results showed that in the conditions with speech antiphasic, there was an 
increase in speech-perception equivalent to a 6 dB increase in the SNR. The authors 
further noted that the frequency regions below 500 Hz were primarily responsible for this 
binaural release from masking. 

Carhart et al. (1967) demonstrated MLDs of 4 to 7 dB when the listening 
conditions were changed from binaural homophasic to binaural antiphasic. These authors 
measured the speech-perception of six subjects with normal hearing using monosyllabic 
and spondaic words in a background of noise (white noise or competing speech) under 
earphones. Binaural listening conditions included noise homophasic with speech 
homophasic and noise antiphasic with speech homophasic. Additional conditions 
included interaural time delays of the signal ranging from 0.1 to 0.8 milliseconds (ms). 
Results showed that the improvement in threshold (from homophasic to antiphasic) was 4 
dB for monosyllabic words and 7 dB for spondaic words. Threshold improvements were 
also observed as interaural time delays increased. However, these threshold 
improvements as a function of increased interaural time delay were always smaller than those in 
the antiphasic condition. The authors concluded that this improvement demonstrates that 
the acoustical parameter most responsible for binaural release from masking is the 
interaural phase relation. 

In summary, studies have shown that binaural release from masking yields a 4- to 
7-dB advantage over listening in the same situation with only one ear. It should be noted 
that this 4- to 7-dB improvement with unmasking can only be measured under earphones 




in a laboratory. In the real world, interaural phase differences occur at random and 

cannot be well controlled (Bess and Tharpe, 1986; Konkle and Schwartz, 1981). 

Nevertheless, the evidence strongly suggests that the increase in speech-perception 

observed in sound field testing, where the noise and speech are spatially separated, is due, 

at least in part, to binaural release from masking. Therefore, it can be said that binaural 

release from masking is the laboratory expression of the squelch effect (Bess and Tharpe, 

1986). 

Summary of Binaural Advantages 

While each of these binaural phenomena (summation, head shadow, localization, 

squelch, and release from masking) is usually studied in isolation, each is active and 
interacting with the others in everyday listening to yield a combined advantage for the 
binaural listener. In summary, binaural hearing gives: 1) summation of intelligibility 
equivalent to a 3- to 6-dB threshold improvement in quiet, 2) improvement in speech- 
perception from the head shadow effect of up to 6 dB in quiet and 13 dB in noise, 3) 
enhanced localization, 4) binaural squelch of 3- to 6-dB improvement in SNR, and 5) 
binaural release from masking equivalent to 4- to 7-dB improvement in threshold. 

Other Binaural Advantages 

In addition to binaural summation, head shadow effect, localization, binaural 
squelch, and binaural release from masking, there are many other commonly reported 
advantages of binaural hearing. These include spatial balance, increased ability to 
identify common sounds, ease of listening, better quality of sound (including a greater 
clarity and fullness of sound), and increased readiness and certainty with listening 
(Bergman, 1957; Bocca, 1955; Byrne, 1980; Feuerstein, 1992; Gelfand and Hochberg, 




1976; Groen and Hellema, 1960; Koenig, 1950; Libby, 1980; Lochner and Burger, 1961; 
Markides, 1977; Mueller and Hawkins, 1995; Nabelek, 1993). 

Unilateral Hearing Loss and Speech Perception 

Clearly, studies have shown that binaural hearing yields many advantages over 
monaural hearing in regard to speech-perception. For example, Bess et al. (1986) found 
significantly reduced speech-perception ability in a group of 25 children with unilateral 
SNHL. Specifically, these authors evaluated the speech-perception ability of a group of 
25 children with normal hearing and 25 age-matched children with unilateral SNHL. The 
children listened to nonsense syllables in various backgrounds of cafeteria noise. Both 
groups were tested in monaural-direct condition (better ear toward the speech and poorer 
ear toward the noise). The group with unilateral SNHL was also tested in monaural- 
indirect condition (poorer ear toward the speech and the better ear toward the noise). 
Results showed that the group with normal binaural hearing had significantly better 
speech-perception scores than the group with unilateral SNHL in both the monaural- 
indirect and monaural-direct conditions at all SNRs. The average speech-perception improvement 
for binaural over monaural-direct was 7.3% and for binaural over monaural-indirect was 
28.1%. Furthermore, the results showed that as the listening condition worsened (i.e., 
poorer SNR), the disparity between binaural speech-perception and either monaural 
speech-perception grew larger. 

Asymmetrical Hearing Loss and Speech Perception 

Despite these aforementioned studies on the advantages of binaural hearing, few 
researchers have specifically examined the impact of asymmetrical hearing loss on the 
advantages of binaural hearing. For example, a few investigators (Keys, 1947; Pollack, 
1948; Shaw et al., 1947) compared binaural summation when the signal was equated at 




both ears (equal SL) and when the signal was not equated at both ears (equal HL). The 
equating of signal levels between the ears of a listener with unequal hearing attempts to 
reduce the asymmetry of hearing. Conversely, presenting the signal at the same HL to a 
listener with unequal hearing between the ears tends to preserve the asymmetry. These 
authors determined that while the binaural summation is substantially reduced for cases 
of unequal hearing (asymmetry), it is not entirely eliminated. 

Keys (1947), discussed previously, found that when the ears of the subjects with 
SNHL were equated (Method II) there was a statistically significant 4.1-dB binaural 
summation for speech. When the ears were not equated (Method I), the average binaural 
summation was only 1.5 dB. However, the author noted that for subjects who had an 
asymmetry of 6 dB or less, Method I yielded a binaural summation of 2.86 dB while 
those subjects with asymmetry greater than 6 dB had a binaural summation of 0.77 dB. 
Even though the binaural summation obtained in Method I was not statistically 
significant, these numbers suggest that the binaural advantage is not completely 
eliminated for individuals with asymmetrical hearing loss. 

Shaw et al. (1947), described previously, tested two groups of subjects with 
normal hearing. Both groups were presented with a series of pure-tones under earphones 
in binaural and monaural conditions. For Group I (10 subjects) the stimuli were presented 
at equal HL and for Group II (13 subjects) the stimuli were presented at equal SL. These 
authors reported a 4- to 5-dB average pure-tone threshold difference. Results showed a 
significant binaural summation of 3.6 dB for Group II (equated hearing) while Group I 
obtained a 1- to 2-dB binaural lowering of threshold. The authors concluded that even 
though a greater binaural summation can be shown if the two ears are equated than if the 
two ears are not equated, the unequated condition still yields a binaural summation. Even 
though this investigation does not deal directly with asymmetrical hearing loss, the 
authors observed that even for a group of listeners with substantially normal hearing, a 
true binaural presentation (as if from sound field) would result in unequal hearing at the 
two ears and that this condition yields a 1- to 2-dB lowering of the pure-tone threshold 
over a monaural threshold. 

Pollack (1948) measured binaural summation on 10 subjects with normal hearing 
under earphones using white noise. Binaural and monaural presentations were made. In 
Part I, the signal was presented at equal dB SLs to the two earphones for the binaural 
presentation. In Part II, the signal was presented at unequal dB SLs (or mismatched) at 
the two earphones for the binaural presentation. The mismatching of presentation levels 
in Part II covered a range of up to 12 dB. Results showed a 2.1-dB binaural summation 
for Part I (equal SL). Part II showed a graduated reduction in summation as the 
discrepancy between the ears increased. Specifically, when the ears were mismatched by 
3 dB the binaural advantage was 0.8 dB but when the mismatch was 12 dB the 
summation was 0.2 dB. 

Bocca (1955) provided evidence of binaural advantage (and binaural integration) 
in the case of unequal hearing by measuring the effect of binaural presentation on speech- 
discrimination scores in a group of 14 subjects with normal hearing. He presented a 
series of bisyllabic words to his subjects via earphones under four conditions: (1) 
condition A (monaural)- words presented 5 dB below threshold; (2) condition B 
(monaural)- distorted words (low-pass filtered with 500 Hz cut-off) presented 45 dB 
above threshold; (3) condition C- a monaural presentation of conditions A and B 
combined; and (4) condition D- a binaural presentation of conditions A and B combined 
where each ear received both conditions A and B. Conditions A and B together represent 
unequal hearing at the two ears. Results showed a 36% increase in speech-perception 
scores in the true binaural presentation (Condition D) over the combined monaural 
presentation (Condition C). The author noted that this increase was the binaural 
summation effect as reported by others. The author further noted that while there was no 
change in speech-perception scores in the combined monaural presentation (Condition C) 
over the distorted-only monaural presentation (Condition B), the subjects reported a 
"much clearer" sound in the delivery of the speech stimuli. The author concluded that 
not only was there a summation effect for the binaural presentation but also a binaural 
integration effect; that is, a subjective increase in the quality of sound when listening 
with two ears as opposed to one. Using a 6% per dB ratio for determining the PIF (Bess 
and Tharpe, 1986; Carhart, 1967; Konkle and Schwartz, 1981), this 36% improvement 
for binaural speech-perception would represent a 6-dB increase in binaural sensitivity. 
Nabelek and Mason (1981) studied the effect of noise and reverberation on the 
speech-perception of 15 subjects with bilateral SNHL, five of whom had asymmetrical 
SNHL. Asymmetrical SNHL was defined as interaural threshold differences equal to or 
greater than 15 dB for at least two frequencies between 250-8000 Hz (at octave 
intervals). All subjects were presented with a speech signal (the Modified Rhyme Test, 
MRT) in a background of noise (speech babble). Speech and noise were presented 
separately from two loudspeakers located at +/- 30° azimuth. All subjects were tested 
under four conditions: (1) aided and unaided, (2) SNR of +30, +10, +5, 0, and -5 dB, (3) 
binaural and monaural, and (4) reverberation times (RT) of 0.1 and 0.5 seconds. The 
monaural condition was created by means of an earplug and an earmuff. Results showed 
that binaural speech-perception scores were significantly higher than were monaural 
scores for all conditions. Across all SNRs, the binaural advantage was approximately 6% 
and was not dependent on any of the other conditions. The authors also attempted to 
determine if a difference existed in speech-perception between the group of five subjects 
with asymmetrical SNHL and the group of 10 subjects with symmetrical SNHL. The 
authors concluded that even though the number of subjects was too small to rely on 
statistics, the group with asymmetrical SNHL showed a trend toward greater 
susceptibility to the effects of noise and reverberation. It appears, however, that no attempt was made 
by the authors to match subjects (or groups) by hearing threshold. 

It appears, from the foregoing review, that individuals with asymmetrical hearing 
loss, while not achieving the full benefit of binaural hearing, retain some of the 
advantages of binaural hearing. However, the impact of asymmetrical hearing loss on the 
binaural advantage has yet to be directly measured. Therefore, the purpose of the present 
investigation was to determine the effect(s) of asymmetrical hearing loss on speech 
perception in noise at different azimuths. The following experimental question was 
explored in this project: 

What are the effects of symmetrical and asymmetrical SNHL on speech 
perception in noise at various azimuths of speech and noise presentation? 



CHAPTER 3 
MATERIALS AND METHODS 

The purpose of this study was to determine the effects of symmetrical SNHL and 
asymmetrical SNHL on speech-perception ability in noise. Subjects consisted of 16 
listeners with symmetrical SNHL and 16 listeners with asymmetrical SNHL. An 
adaptive procedure was used to assess the SRT in noise. Speech-spectrum shaped noise 
was used as the competing stimulus. Speech and noise were presented in sound field 
from spatially separated loudspeakers at various azimuths on the same horizontal plane. 

Materials 
Subjects 

Thirty-two adult listeners, 16 with symmetrical SNHL and 16 with asymmetrical 
SNHL, were paid to serve as subjects for this experiment. All subjects exhibited pure- 
tone thresholds equal to or better than 25 dB HL at 250 Hz and 500 Hz. Subjects with 
symmetrical SNHL exhibited no greater than a 15-dB difference in thresholds between 
ears at any one frequency from 250 Hz to 8000 Hz. Only three subjects with symmetrical 
SNHL exhibited a 15-dB separation between ears at one frequency. All other 
symmetrical subjects (n=13) exhibited no more than a 10-dB difference between ears at 
any frequency. Figure 3-1 shows the individual audiograms for the symmetrical subjects. 
[Figure 3-1. Individual Audiograms for Symmetrical Subjects: (a) Subject S1, (b) Subject S2, 
(c) Subject S3, (d) Subject S4, (e) Subject S5, (f) Subject S6, (g) Subject S7, (h) Subject S8, 
(i) Subject S9, (j) Subject S10, (k) Subject S11, (l) Subject S12, (m) Subject S13, 
(n) Subject S14, (o) Subject S15, (p) Subject S16. Each panel plots hearing threshold (dB HL) 
from 250 Hz to 8000 Hz for the right ear (O) and left ear (X).]

Subjects with asymmetrical hearing loss exhibited a 15-dB or greater difference in 
thresholds between ears at two or more frequencies from 250 Hz to 8000 Hz. In the 
asymmetrical group, one subject had an asymmetrical SNHL (15 dB or greater) at four 
frequencies, 11 subjects had an asymmetrical SNHL at three frequencies, and three 
subjects had an asymmetrical SNHL at two frequencies. Figure 3-2 shows the individual 
audiograms for the asymmetrical subjects. Figure 3-3 shows the average audiogram for 
the symmetrical group and asymmetrical group. These data are also presented in Table 
3-1. As can be noted from Table 3-1, the BE (better ear) thresholds of the subjects with 
asymmetrical SNHL were matched by frequency (from 250 Hz to 8000 Hz) within 5 dB 
of the right and left ear average for the subjects with symmetrical SNHL. The mean age 
for the symmetrical group was 61.9 years with a standard deviation of 9.1 years and a 
range of 45-74 years. The mean age for the asymmetrical group was 63.0 years with a 
standard deviation of 8.0 years and a range of 52-75 years. As can be noted, the mean 
age for the two groups was matched within +/- 5 years. Demographic data (age, gender, 
duration of hearing loss, duration of hearing aid use, and suspected etiology) for the 
symmetrical subjects are shown in Table 3-2. The same demographic data for the 
asymmetrical subjects are shown in Table 3-3. 

In addition to the above, all subjects met the following criteria: (1) normal 
tympanometric results as indicated by middle-ear pressure within +/- 150 daPa, (2) 
acoustic reflex thresholds (ARTs) commensurate with the pure-tone thresholds (Silman 
and Silverman, 1991), (3) negative acoustic reflex (AR) decay if ARTs allowed for such 
assessment, (4) negative tone decay (Olsen and Noffsinger, 1974) if AR decay could not 
be assessed, (5) word-recognition scores equal to or better than 70% in quiet between 40 
and 65 dB SL (re: SRT), and (6) native speaker of American English. 

Thirty-two of 72 possible subjects that were tested met the above criteria and 
were included in the study.

[Figure 3-2. Individual Audiograms for Asymmetrical Subjects: (a) Subject A1, (b) Subject A2, 
(c) Subject A3, (d) Subject A4, (e) Subject A5, (f) Subject A6, (g) Subject A7, (h) Subject A8, 
(i) Subject A9, (j) Subject A10, (k) Subject A11, (l) Subject A12, (m) Subject A13, 
(n) Subject A14, (o) Subject A15, (p) Subject A16. Each panel plots hearing threshold (dB HL) 
from 250 Hz to 8000 Hz for the right ear (O) and left ear (X).]

[Figure 3-3. Average Audiogram for Symmetrical Subject Group and Asymmetrical Subject Group: 
(a) Symmetrical Subjects, (b) Asymmetrical Subjects.]

Table 3-1. Hearing Thresholds for Symmetrical Subjects and Asymmetrical Subjects

Symmetrical Subjects            250 Hz  500 Hz  1 kHz  2 kHz  3 kHz  4 kHz  6 kHz  8 kHz
Right Ear                        13.4    12.8   20.6   32.2   48.1   57.5   61.9   66.3
Left Ear                         12.2    11.9   18.1   30.6   50.6   57.8   62.8   66.3
Averaged Bilateral Thresholds    12.8    12.3   19.4   31.4   49.4   57.7   62.3   66.3
Average Degree of Asymmetry       1.2     0.9    2.5    1.6    2.5    0.3    0.9    0.0

Asymmetrical Subjects           250 Hz  500 Hz  1 kHz  2 kHz  3 kHz  4 kHz  6 kHz  8 kHz
Better Ear                       10.3    10.3   14.1   32.2   52.5   58.8   64.4   68.1
Poorer Ear                       10.9    10.6   20.3   47.2   66.3   73.8   76.9   76.6
Averaged Bilateral Thresholds    10.6    10.5   17.2   39.7   59.4   66.3   70.6   72.3
Average Degree of Asymmetry       0.6     0.3    6.3   15.0   13.8   15.0   12.5    8.4

Hearing Thresholds in dB HL.

Table 3-2. Demographic Data for the Symmetrical Subjects

Subject   Age (Years)   Gender   Duration of Hearing   Duration of Hearing   Suspected Etiology
                                 Loss* (Years)         Aid Use* (Years)
S1        45            M        20                     0                    Genetic
S2        62            F        20                    20                    Noise
S3        47            M        20                     0                    Noise
S4        63            F        20                     0                    Ototoxicity
S5        72            M         5                     2                    Noise
S6        74            M         1                     1                    Noise
S7        69            M        20                     2                    Noise
S8        55            M        20                    15                    Noise
S9        59            M        40                     7                    Noise
S10       56            M        33                     2                    Noise
S11       50            M        15                     1                    Noise
S12       64            M        30                     1                    Noise
S13       71            M        45                     3                    Noise
S14       68            M        10                     0                    Noise
S15       68            M        30                     3                    Noise
S16       68            M        30                     0                    Noise
Average   61.9                   22.4                   3.5
Standard   9.1                   11.9                   5.8
Deviation

(* = As reported by subject).

Table 3-3. Demographic Data for the Asymmetrical Subjects

Subject   Age (Years)   Gender   Duration of Hearing   Duration of Hearing   Suspected Etiology
                                 Loss* (Years)         Aid Use* (Years)
A1        56            F        "many years"           4                    Noise
A2        72            M        30                     0                    Noise
A3        61            M         6                     0                    Noise
A4        75            F         3                     0                    Unknown
A5        61            M        20                     7                    Noise
A6        75            M        56                     0                    Unknown
A7        74            M        25                     5                    Noise
A8        69            M        50                     0                    Noise
A9        59            M        30                     0                    Noise
A10       53            M        30                     5                    Noise
A11       59            M        25                     2                    Noise
A12       53            M        50                     0                    Genetic, Noise
A13       52            M        15                     0.5                  Noise
A14       64            M        45                     1                    Noise
A15       59            M        25                     0                    Noise
A16       66            M        15                     0.5                  Noise
Average   63.0                   28.3                   1.6
Standard   8.0                   16.0                   2.3
Deviation

(* = As reported by subject).

The subject pool was selected from the population base in the 
geographic areas of North Florida and Southern Georgia. Potential subjects were initially 
identified from a review of an estimated 2500 clinic files at the University of Florida 
Speech and Hearing Clinic (UFSHC) and from the Malcolm Randall Veterans Affairs 
(VA) Audiology Clinic (Gainesville, Florida). Approximately 500 individuals from these 
files were identified as potential subjects and contacted by phone and/or letter to solicit 
their participation in the study. Seventy-two subjects kept appointments and received 
audiological examination and filled out a case history questionnaire. Of these 72 
subjects, 32 met the subject selection criteria and exhibited audiometric threshold that 
met the criteria for inclusion into one of the two groups: symmetrical SNHL or 
asymmetrical SNHL. The process for obtaining subjects is depicted by the flow chart in 
Figure 3-4. This study was granted approval from the University of Florida Institutional 
Review Board (IRB) on November 30, 1999. Approval to solicit veterans from the VA 
Audiology Clinic was also granted from the VA Research and Development Board on 
February 14, 2000. Prior to participating in the study, each subject was required to sign 
the University of Florida approved Informed Consent Form (Appendix A). 
Test Stimuli 

The Hearing in Noise Test (HINT) (Nilsson et al., 1994) was used as the speech 
stimulus. The HINT was developed from the Bamford-Kowal-Bench (BKB) sentences 
(Bench and Bamford, 1979) designed initially to assess speech-perception in British 
children with hearing impairment. These sentences use common nouns and verbs found 
in transcriptions of British children's speech. The sentences were translated from British 
to American English to remove British English idioms (Nilsson et al., 1994). The HINT 
consists of 25 phonemically balanced sentence lists with 10 sentences in each list.

[Figure 3-4. Subject Selection. Flow chart of subject recruitment: Reviewed Files (2500 files*) 
-> Contacted Potential Subjects (500 individuals*) -> Scheduled Appointments (72 subjects) -> 
Audiological Exam and Case History (72 subjects) -> Qualified Subjects Divided into Two Groups 
(32 subjects), with 3 subjects referred for medical management. (*Estimated).]

The sentences have been equated for difficulty when presented in quiet or in noise (Soli and 
Nilsson, 1994). All HINT sentences are uniform in length (six to eight syllables each) 
and are constructed at the first grade reading level. Additionally, all HINT lists have 
been shown to be of equal intelligibility when presented in spectrally matched noise and 
have high test-retest reliability (Nilsson et al., 1994). A sample HINT sentence is: "The 
silly boy was hiding." 

A complete list of all the HINT sentences can be found in Appendix B. A commercially- 
available Compact Disc (CD) recording of the HINT (obtained directly from the House 
Ear Institute, Los Angeles, California) was used. A 1000-Hz calibration signal (narrow- 
band noise), consistent with the root mean square (RMS) of the HINT sentences, is 
recorded on Track 37 of the CD. The HINT was chosen as the stimulus for this study 
because it is representative of "everyday" running speech (which accounts for natural 
SPL fluctuations, intonations, pauses, and spectral weighting in speech), has been 
standardized with competing noise, and lends itself to an adaptive testing procedure 
(Nilsson et al., 1994). 
Competing Stimuli 

Speech-spectrum shaped noise was used as the competing stimulus. The noise was 
generated by computing the average long-term spectrum of the HINT sentences. This 
ensured that the average SNR (between the speech signal and the noise) was equated 
across frequencies (Nilsson et al., 1994). The noise is available on the second channel of 
the commercially available CD. Speech-spectrum shaped noise was developed for the 
HINT for the following reasons: (1) speech-spectrum shaped noise provides maximum 
masking of the speech signal because it has the same spectral characteristics as the speech 
stimuli, (2) speech-spectrum shaped noise is representative of many "everyday" listening 
environments, and (3) steeper intelligibility functions are obtained with speech-spectrum 
shaped noise than with other types of noise (Nilsson et al., 1994; Prosser, Turrini, and 
Arslan, 1991). 
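
The spectrally matched noise used here comes pre-recorded on the HINT compact disc, so no noise generation was required in this study. For readers unfamiliar with the concept, the following minimal Python sketch illustrates one common way of producing speech-spectrum shaped noise: impose the magnitude spectrum of a speech recording onto random phases. The function name and the single-FFT shortcut are illustrative assumptions of this sketch, not the procedure used by Nilsson et al. (1994).

import numpy as np

def speech_shaped_noise(speech: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Illustrative sketch: generate noise whose long-term magnitude spectrum matches
    that of `speech` by pairing the speech magnitude spectrum with random phases."""
    rng = np.random.default_rng(seed)
    magnitude = np.abs(np.fft.rfft(speech, n=n_samples))       # long-term magnitude spectrum
    phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, magnitude.size))
    noise = np.fft.irfft(magnitude * phases, n=n_samples)
    return noise / np.max(np.abs(noise))                        # peak-normalize

# Example with a synthetic signal standing in for a real speech recording.
fs = 16000
t = np.arange(fs) / fs
fake_speech = np.sin(2 * np.pi * 500 * t) * np.hanning(fs)
masker = speech_shaped_noise(fake_speech, n_samples=fs)

Because the masker inherits the speech's spectral weighting, the signal-to-noise ratio is roughly constant across frequency, which is the property the HINT noise was designed to provide.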

Experimental Conditions 
Speaker-Listener Orientation 

Three speaker-listener orientations, as depicted in Figure 3-5, were used in this 
experiment. These conditions were: (1) speech at 0° and noise at 180°, (2) speech at 45° 
and noise at 225°, and (3) speech at 315° and noise at 135°. The 0°-180° orientation was 
used because it represents a realistic listening situation in which the listener attempts to 
place the talker to the front and the interfering noise to the rear (Hawkins and Yacullo, 
1984; Jerger and Hayes, 1976). Except in unusual circumstances, the head is not fixed in 
one position and the listener is at liberty to rotate his/her head to optimize the listening 
condition as needed. The orientations of 45°-225° and 315°-135° represent the listener's 
ability to move his/her head up to +/- 45° while still maintaining eye contact with the 
talker (Boney, 1987). 

These speaker-listener orientations were achieved by placing the subject 
equidistant between two loudspeakers, one situated in front of the 
subject and the other behind the subject. All angles were relative to the midline of the 
subject's head and were measured with a protractor at the position of the midpoint of the 
subject's head (with the subject absent). Distance from the midpoint of the subject's 
head to each loudspeaker was 1 m. This placed the subject within the critical distance of 
the room. The critical distance is that point in a room in which the intensity of the direct 
sound is the same as the intensity of the reflected sound (Klein, 1971; Peutz, 1971). 



[Figure 3-5. The three experimental conditions in the sound booth. The left panel depicts the 
HINT sentences delivered at 0° and the speech noise delivered at 180°. The middle panel depicts 
the HINT sentences delivered at 45° and the speech noise delivered at 225°. The right panel 
depicts the HINT sentences delivered at 315° and the speech noise delivered at 135°. 
(S = HINT sentences; N = Speech Noise)]

Thus, by placing the subject within the critical distance, the effects of reverberation were 
minimized. The critical distance (Dc) is defined by the following formula: 

Dc = 0.2 √[V / (n × RT)] 
where V = volume of the room in m³, n = number of sources, and RT = reverberation 
time (RT60) of the room at 1400 Hz. The critical distance, then, for the sound booth used 
in this experiment was approximately 1.5 m. 
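
As a quick check on this figure, the critical-distance formula can be evaluated directly. The booth volume and reverberation time in the sketch below are illustrative assumptions (values typical of a small double-walled booth), not measurements reported in this document; with two sources they yield a Dc of roughly 1.4 m, on the order of the 1.5 m stated above.

import math

def critical_distance(volume_m3: float, n_sources: int, rt_seconds: float) -> float:
    """Dc = 0.2 * sqrt(V / (n * RT)), with V in cubic meters and RT in seconds."""
    return 0.2 * math.sqrt(volume_m3 / (n_sources * rt_seconds))

# Assumed values for illustration only: a booth of roughly 10 m^3, two loudspeakers,
# and an RT60 of about 0.1 s at 1400 Hz.
print(f"Dc ~= {critical_distance(10.0, 2, 0.1):.2f} m")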

In the 0°-180° orientation, the subject sat facing the speech loudspeaker with the 
noise loudspeaker directly behind the subject. This orientation was referred to as midline 
(MID). In the 45°-225° orientation the subject was rotated 45° to the left. In the 315°- 
135° orientation the subject was rotated 45° to the right. The right orientation and left 
orientation were designated "better-ear" (BE) and "poorer-ear" (PE) depending on 
whether the subject's BE or PE was toward the speech loudspeaker. When the right ear 
was the BE and was oriented toward the speech loudspeaker (the 45°-225° orientation), 
this was the better-ear condition. When the left ear was the PE and was oriented toward 
the speech loudspeaker (the 315°-135° orientation), this was the PE condition. Subject 
head movement was minimized by requiring each subject to wear a neck brace (the type 
commonly used with neck injury patients) during the presentation of the stimuli. 
Additionally, subjects were instructed to sit upright in a comfortable position in the chair 
and to look straight ahead at a designated location on the sound booth wall during the 
presentation of the stimuli. Subjects were observed through the sound booth window 
during the experimental conditions to ensure proper test position. 

To assure correct playback levels for the HINT sentences and speech spectrum 
shaped noise at the subject's ear, the output level for the 1000-Hz narrow-band noise 
calibration signal was measured in the sound field. This measurement was taken by 
placing a sound-level meter (SLM) (Bruel and Kjaer (B&K), Type 2235, Model IEC- 
651) with a 1/2-inch microphone (B&K, Type 4176) at the point in space where the 
subject's head would be, but with the subject absent. The SLM was positioned 
perpendicular to the loudspeaker so that the calibration tone struck the microphone at 90° 
incidence. To assure that the stimulus level was the same for each subject, this 
calibration process was repeated at the beginning of each test session. 
Experimental Procedures 

Prior to participation in the experimental conditions, each subject was given a 
diagnostic audiometric evaluation (e.g., air conduction and bone conduction, SRT, word 
recognition, tympanometry, ART, AR decay if possible, and tone decay if needed). All 
testing, with the exception of the immittance testing, was conducted in an Industrial 
Acoustics Company (IAC) double-walled sound-treated room that met ANSI (1991) 
standards. Pure-tone thresholds were established at 250, 500, 1000, 2000, 3000, 4000, 
6000, and 8000 Hz in both ears. Speech recognition thresholds were established via 
monitored live voice (MLV) using spondaic words, followed by word recognition testing 
via MLV using Northwestern University Auditory Test number 6 (NU-6) word lists. 
Tone decay testing (Olsen and Noffsinger, 1974) was performed as an additional check 
against retrocochlear pathology if AR decay (discussed below) could not be obtained. In 
addition to audiometric testing, immittance testing was conducted on each subject prior to 
participation in the experimental paradigm. Immittance testing consisted of 
tympanometry, AR thresholds, and AR decay (when possible). 

In addition to obtaining the above mentioned audiometric information, each 
subject filled out a case history questionnaire requesting such information as duration of 
hearing loss, etiology of hearing loss, and hearing aid use. Furthermore, each subject 
responded to questions regarding presence or absence of tinnitus, difficulties related to 
speech perception in noise, and possible retrocochlear pathology. 

Following the audiometric evaluation, speech-perception testing in the various 
experimental conditions was conducted. Speech and noise were played on separate 
channels of a CD player (Sony, Model CDP-XA1ES), routed through a Grason-Stadler 
Instruments (GSI, Model 16) audiometer and presented to the subject through a pair of 
GSI loudspeakers (Model 1761-9630). Figure 3-6 shows a diagram of the equipment set- 
up. 

Subjects were seated in the sound booth and instructions were given orally before 
the testing began. Subjects were instructed to repeat each HINT sentence exactly as they 
heard it. Guessing was encouraged. For this exercise, two 10-sentence HINT lists were 
combined to make one list. This resulted in twelve 20-sentence HINT lists. Three HINT 
lists (of 20 sentences) were given in each speaker-listener orientation as practice lists to 
minimize learning effects. For the experimental paradigm, three different HINT lists (of 
20 sentences) were given, one in each of the three speaker-listener orientations. List 
order and speaker-listener orientation were randomized for each subject. To avoid 
learning effects, no subject received any sentence list more than once. 

An adaptive test procedure was used to assess speech-perception because past 
researchers have shown that adaptive testing procedures have greatly improved reliability 
and significantly reduced variability over percentage-correct procedures in speech- 
perception testing (Crandell, 1991; Crandell and Boney, 1999; Dirks, Morgan, and 
Dubno, 1982; Duquesnoy, 1983; Nilsson et al., 1994; Ostler and Crandell, 1999; Plomp, 



60 



1.0 m 



Sound Booth 




Audiometer 
GSI16 



Sony Compact 
Disc Player 



HINT Compact 
Disc 



1.0 m 



N 



Window 



(S = HINT Sentences; N = Speech Noise). 



Figure 3-6. Diagram of Equipment Set-Up 






61 

1978, 1986; Plomp and Mimpen, 1979; Van Tasell and Yanz, 1987). For the adaptive 
procedure, the competing noise level was kept constant, while the speech signal was 
varied in an up-down manner according to the subject's response. Seventy dB SPL was 
selected as the noise level because it is representative of many everyday noise levels 
(Crandell, 1991). 

The adaptive paradigm, as outlined below, was followed for each subject: 

1. Sentence number 1 was presented at -10 dB SNR and replayed in 4-dB higher 
increments until it was repeated correctly 

2. Sentence number 2 was then presented 4 dB lower 

3. If it was repeated correctly, sentence number 3 was presented 4 dB lower 

4. If sentence number 2 was repeated incorrectly, then sentence number 3 was 
presented 4 dB higher 

5. The level for sentence number 4 was determined in the same manner 

6. The first sentence of the test phase (number 5) was presented 4 dB lower or higher 
than the ending level of the initial phase, depending on whether sentence number 4 
was repeated correctly 

7. The remaining 15 sentences were presented with a 2-dB step size (e.g., if 
sentence number 5 was repeated correctly then sentence number 6 would be 
presented 2 dB lower). 




The SRT (in dB SPL) for each HINT list was determined by averaging the 
thresholds of sentences 5 through 21. Note that while there was no sentence 21, its level 
was calculated based on the subject's response to sentence 20. That is, if the subject 
responded correctly to sentence number 20, a level 2 dB lower was recorded as the 
presentation level for the hypothetical sentence 21. Conversely, if the subject responded 
incorrectly to sentence number 20, a level 2 dB higher was recorded as the presentation 
level for the hypothetical sentence 21. 

The SNR at which the listener could perceive 50% correct can readily be 
determined by noting the difference between the average SRT for a particular orientation 
and the background noise level (70 dB SPL). For example, if the SRT for a given MID 
condition was 68.5 dB SPL, then the SNR would be -1.5 dB. 
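
To make the scoring rule concrete, the Python sketch below implements the 2-dB up-down tracking and the sentence 5-through-21 averaging just described. It is offered only as an illustration of the published procedure, not the software used in this study; the starting level, the response sequence, and the function names are fabricated for the example.

def next_level(level_db: float, was_correct: bool, step_db: float = 2.0) -> float:
    """Up-down rule: lower the level after a correct repetition, raise it after a miss."""
    return level_db - step_db if was_correct else level_db + step_db

def track_levels(start_db: float, responses: list, step_db: float = 2.0) -> list:
    """Presentation levels for sentences 5-20: sentence 5 starts at start_db and each
    subsequent level is set from the response to the preceding sentence."""
    levels = [start_db]
    for was_correct in responses[:-1]:
        levels.append(next_level(levels[-1], was_correct, step_db))
    return levels

def hint_srt(levels_db: list, responses: list, noise_db_spl: float = 70.0):
    """Average the 16 test-sentence levels plus the hypothetical sentence-21 level,
    then express the result as an SNR relative to the fixed 70-dB SPL noise."""
    level_21 = next_level(levels_db[-1], responses[-1])
    srt = (sum(levels_db) + level_21) / (len(levels_db) + 1)
    return srt, srt - noise_db_spl

# Fabricated response track for one 20-sentence list (sentences 5-20 only).
responses = [True, True, False, True, False, True, True, False,
             True, False, True, False, True, True, False, True]
levels = track_levels(start_db=72.0, responses=responses)
srt, snr = hint_srt(levels, responses)
print(f"SRT = {srt:.1f} dB SPL, SNR at SRT = {snr:+.1f} dB")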

Statistical Analysis 

The dependent variable (SRT) was analyzed using a two-factor repeated measures 
analysis of variance (ANOVA) procedure. The two factors were Group (symmetrical 
SNHL/asymmetrical SNHL) and Condition (speaker-listener orientation). The covariates 
of age, gender, degree of hearing loss, duration of hearing loss, duration of hearing aid 
use, and etiology of hearing loss were analyzed using an analysis of covariance 
(ANCOVA) in order to determine their influence on the dependent variable. 
Additionally, the demographic markers age, duration of hearing loss, and duration of 
hearing aid use were analyzed using paired comparison t-test procedures, while gender 
and etiology of hearing loss were analyzed using chi-square procedures. All analyses 
were tested at alpha = 0.05. 
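
For readers who want to see this analysis plan in code form, the sketch below shows how the analyses named above could be run in Python. It is an illustration only: this document does not name its statistical software, the pingouin and SciPy calls are assumptions of this sketch, the column names are hypothetical, and the data frame is truncated to three subjects per group (the actual analysis would use all 32 subjects).

import pandas as pd
import pingouin as pg          # assumed package for the mixed-design ANOVA
from scipy import stats        # assumed package for the t-test and chi-square checks

# Hypothetical long-format layout: one row per subject per speaker-listener condition,
# truncated to three subjects per group (SRT values taken from Tables 4-4 and 4-5).
df = pd.DataFrame({
    "subject":    ["S1"]*3 + ["S2"]*3 + ["S3"]*3 + ["A1"]*3 + ["A2"]*3 + ["A3"]*3,
    "group":      ["sym"]*9 + ["asym"]*9,
    "condition":  ["BE", "MID", "PE"] * 6,
    "srt_db_spl": [61.6, 69.4, 60.0, 64.9, 70.6, 68.0, 60.9, 64.5, 62.1,
                   64.5, 70.8, 69.9, 66.1, 69.6, 70.6, 65.6, 73.4, 70.4],
})

# Two-factor design: Group (between subjects) x Condition (within subjects), alpha = 0.05.
anova_table = pg.mixed_anova(data=df, dv="srt_db_spl", within="condition",
                             subject="subject", between="group")
print(anova_table)

# Demographic comparisons: a paired t-test on a continuous measure (the ages shown are
# the first four per group from Tables 3-2 and 3-3; the pairing order is illustrative)
# and a chi-square test on the gender counts (14 males and 2 females in each group).
t_stat, p_age = stats.ttest_rel([45, 62, 47, 63], [56, 72, 61, 75])
chi2, p_gender, dof, _ = stats.chi2_contingency([[14, 2], [14, 2]])
print(p_age, p_gender)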



CHAPTER 4 
RESULTS 

This investigation measured the effects of symmetrical SNHL and asymmetrical 
SNHL on speech perception in noise at varying azimuths of presentation for HINT 
sentences presented in a background of speech-spectrum noise. The dependent variable, 
SRT (in dB SPL), was assessed using an adaptive procedure. A subject history 
questionnaire was used to collect a demographic profile for each participant in the study. 
Additionally, pure-tone hearing sensitivity data was obtained via audiometry. A 
repeated-measures two-factor ANOVA procedure was employed to examine differences 
between Group (subjects with symmetrical SNHL/subjects with asymmetrical SNHL) 
and Condition (speech at 0° and noise at 180°, speech at 45° and noise at 225°, and speech 
at 315° and noise at 135°). In addition, an ANCOVA was utilized to examine the effects 
of various covariates (age, gender, degree of hearing loss, duration of hearing loss, 
duration of hearing aid use, and etiology of hearing loss) on SRT data. Paired 
comparison t-tests were conducted to compare group differences in age, duration of 
hearing loss, and duration of hearing aid use, while chi-square analyses were conducted 
to compare group differences in gender and etiology of hearing loss. 

Demographic Profile 

Data obtained from the subject history questionnaire for the two groups 
(symmetrical and asymmetrical) are shown in Table 4-1. As can be seen, the 
demographic profile shows that the two groups are similar in each category: age, gender, 
duration of hearing loss, duration of hearing aid use, and etiology of hearing loss. 

Table 4-1. Summary of Demographic Data for Symmetrical Group and Asymmetrical Group

                                        Symmetrical Subjects   Asymmetrical Subjects
                                        (N=16)                 (N=16)
Age (years)              Mean           61.9                   63.0
                         SD              9.1                    8.0
Gender (N)               Male           14 (87.5%)             14 (87.5%)
                         Female          2 (12.5%)              2 (12.5%)
Duration of Hearing      Mean           22.4                   28.3
Loss (years)             SD             11.9                   16.0
Duration of Hearing      Mean            3.5                    1.6
Aid Use (years)          SD              5.8                    2.3
Etiology (N)             Noise          13 (81.25%)            13 (81.25%)
                         Genetic         1 (6.25%)              0 (0.00%)
                         Ototoxicity     1 (6.25%)              0 (0.00%)
                         Unknown         1 (6.25%)              2 (12.5%)
                         Noise and       0 (0.00%)              1 (6.25%)
                         Genetic

(SD = Standard Deviation).

Specifically, the mean age for the symmetrical group was 61.9 years (SD = 9.1) 
while the mean age for the asymmetrical group was 63.0 years (SD = 8.0). The 
difference in mean age between the two groups was 1.1 years. Both groups exhibited the 
same number of males (N = 14) and females (N = 2). The mean duration of hearing loss 
for the symmetrical group was 22.4 years (SD = 11.9) and the mean duration of hearing 
loss for the asymmetrical group was 28.3 years (SD = 16.0), with a difference between 
groups of 5.9 years. The mean duration of hearing aid use was 3.5 years for the 
symmetrical group (SD = 5.8) and 1.6 years for the asymmetrical group (SD = 2.3). The 
difference in mean duration of hearing aid use between the two groups was 1.9 years. 
For suspected cause of hearing loss, over 80% of the individuals in each group indicated 
exposure to loud noise as the primary cause for their hearing loss. This is not surprising 
as 12 of the 16 subjects (75%) in the symmetrical group and 13 of the 16 subjects (81%) 
in the asymmetrical group reported prior military noise exposure on the case history 
questionnaire. 

Paired comparison t-test analyses were used to evaluate the differences between 
the two groups for the continuous data sets of age, duration of hearing loss, and duration 
of hearing aid use. No statistically significant differences were found for any of these 
data sets (p > .05). Specifically, there was no statistically significant difference between 
groups for age (p=0.7281), duration of hearing loss (p=0.2514), or duration of hearing aid 
use (p=0.2248). Categorical data, gender and etiology, were evaluated using chi-square 
procedures. These analyses also showed no statistically significant difference between 
groups for gender (p=l .000) or for etiology (p=l .000). Overall, these analyses suggest 
that the demographic profiles between the two groups were relatively homogenous. 




Hearing Thresholds 

Mean hearing thresholds for each of the frequencies tested are shown by BE and 
PE for both the symmetrical and the asymmetrical groups in Table 4-2. Figure 4-1 shows 
the audiograms by group for this same data. In addition, Table 4-2 shows the average 
threshold difference (degree of asymmetry) between ears for each group. These data 
reveal several pertinent trends with regard to the two subject groups. First, the average 
audiometric thresholds observed in the symmetrical group demonstrate symmetrical 
hearing between ears within +/- 4 dB at each frequency tested. Second, the average audiometric 
thresholds for the asymmetrical group show an asymmetry in hearing between BE and PE 
of 6.0 dB at 1000 Hz, 15.0 dB at 2000 Hz, 13.8 dB at 3000 Hz, 15.0 dB at 4000 Hz, 12.5 
dB at 6000 Hz, and 8.4 dB at 8000 Hz. Third, these data demonstrate that the BE of the 
asymmetry group was matched within 5 dB of the BE for the symmetry group. Fourth, 
both groups showed normal hearing (< 25 dB HL) in each ear at 250, 500, and 1000 Hz. 
Fifth, the configuration of the average hearing loss for either ear of the symmetry group 
and the BE of the asymmetry group showed a mild to moderately-severe hearing loss from 2000 
to 8000 Hz, while the PE of the asymmetry group showed an average moderate to severe 
hearing loss from 2000 to 8000 Hz. 

Table 4-3 shows the pure tone average of 1000, 2000, 3000, 4000, 6000, and 8000 
Hz (PTA1000-8000Hz) for the BE and PE for each group as well as the PTA1000-8000Hz 
difference between ears for each group. These data illustrate that the mean PTA1000-8000Hz 
difference between ears for the symmetrical group was 2.3 dB and the mean 
PTA1000-8000Hz difference between ears for the asymmetrical group was 11.8 dB. 
Furthermore, the mean PTA1000-8000Hz difference between the BE of the symmetrical group 
and the BE of the asymmetrical group was 1.7 dB while the mean PTA1000-8000Hz 
difference between the PE of the symmetrical group and the PE of the asymmetrical 
group was 11.2 dB. 

Table 4-2. Summary of Hearing Thresholds (dB HL) by Frequency for 
Symmetrical Subjects and Asymmetrical Subjects

Symmetrical Subjects     250 Hz  500 Hz  1 kHz  2 kHz  3 kHz  4 kHz  6 kHz  8 kHz
BE                        12.8    12.5   18.8   30.0   48.4   56.3   60.3   65.3
PE                        12.8    12.2   20.0   32.5   50.3   59.1   64.4   67.2
Degree of Asymmetry        0.0     0.3    1.2    2.2    1.9    2.8    4.1    1.9

Asymmetrical Subjects    250 Hz  500 Hz  1 kHz  2 kHz  3 kHz  4 kHz  6 kHz  8 kHz
BE                        10.3    10.3   14.1   32.2   52.5   58.8   64.4   68.1
PE                        10.9    10.6   20.3   47.2   66.3   73.8   76.9   76.6
Degree of Asymmetry        0.6     0.3    6.3   15.0   13.8   15.0   12.5    8.4

(BE = Better Ear and PE = Poorer Ear).

[Figure 4-1. Average Audiogram for Symmetrical Subjects and Asymmetrical Subjects: 
(a) Symmetrical Subjects, (b) Asymmetrical Subjects; thresholds (dB HL) from 250 Hz to 8000 Hz.]

Table 4-3. Mean PTA1000-8000Hz (dB HL), Standard Deviation, and Degree of 
Asymmetry for Better Ear and Poorer Ear and Difference in Mean PTA1000-8000Hz 
Within and Between Groups

                                  Symmetrical   Asymmetrical   Difference in Mean
                                  Subjects      Subjects       PTA1000-8000Hz
                                                               Between Groups
BE    Mean                        46.6          48.3            1.7
      SD                           8.1           5.8
PE    Mean                        48.9          60.1           11.2
      SD                           8.4           4.3
Degree of Asymmetry Between
Ears for PTA1000-8000Hz            2.3          11.8

Standard Deviation (SD), Better Ear (BE), Poorer Ear (PE).
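
The PTA1000-8000Hz values themselves are simple arithmetic, and the short sketch below (an illustration only, not the study's analysis code) reproduces the asymmetrical group's better- and poorer-ear values from the group-mean thresholds in Table 4-2. Because Table 4-3 averages subject-level PTAs rather than group means, small rounding differences on the order of 0.1 dB are expected.

FREQS_HZ = (1000, 2000, 3000, 4000, 6000, 8000)

def pta_1000_8000(thresholds_db_hl: dict) -> float:
    """Six-frequency pure tone average (1000-8000 Hz) used throughout this study."""
    return sum(thresholds_db_hl[f] for f in FREQS_HZ) / len(FREQS_HZ)

# Group-mean thresholds (dB HL) for the asymmetrical group, taken from Table 4-2.
better_ear = {1000: 14.1, 2000: 32.2, 3000: 52.5, 4000: 58.8, 6000: 64.4, 8000: 68.1}
poorer_ear = {1000: 20.3, 2000: 47.2, 3000: 66.3, 4000: 73.8, 6000: 76.9, 8000: 76.6}

be_pta = pta_1000_8000(better_ear)   # close to the 48.3 dB HL reported in Table 4-3
pe_pta = pta_1000_8000(poorer_ear)   # close to the 60.1 dB HL reported in Table 4-3
print(f"BE = {be_pta:.1f} dB HL, PE = {pe_pta:.1f} dB HL, asymmetry = {pe_pta - be_pta:.1f} dB")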

Speech Reception Threshold (SRT) 

Individual and mean speech reception threshold (SRT) scores obtained for the two 
groups are presented in Table 4-4 for the symmetrical subjects and in Table 4-5 for the 
asymmetrical subjects. Mean BE SRTs for the two groups are presented in Figure 4-2, 
mean MID SRTs for the two groups are presented in Figure 4-3, and mean PE SRTs for 
the two groups are presented in Figure 4-4. Figure 4-5 presents mean SRTs for both 
groups by each condition. In general, these data show that the two groups obtained 
similar SRTs in the BE condition (symmetrical BE mean SRT = 64.3 dB SPL, SD = 2.4; 
asymmetrical BE mean SRT = 65.3 dB SPL, SD = 1.3) and the MID condition 
(symmetrical MID mean SRT = 69.6 dB SPL, SD = 2.8; asymmetrical MID mean SRT = 
71.2 dB SPL, SD = 2.3). However, the asymmetrical group had poorer speech-perception 
ability in the PE condition (mean PE SRT = 69.3, SD = 2.8) than the symmetrical group 
in the PE condition (PE mean SRT = 65.0, SD = 2.9). Furthermore, these data show that 
subjects in both groups achieved better (lower) SRTs in the BE and PE conditions than in 
the MID condition. 

A two-factor repeated measures ANOVA test procedure was used to examine 
differences in SRT performance for Groups and Conditions. This is a fully randomized 
design in which each subject underwent each condition. The two factors were Group 
(symmetrical and asymmetrical) and Condition (BE, MID, PE). The ANOVA revealed 
statistically significant differences for Group (F(1,30) = 9.49, p = .0044), Condition 
(F(2,60) = 100.26, p = .0001), and Group by Condition (F(2,60) = 10.19, p = .0002). These 
data are shown in Table 4-6. 

Table 4-4. Mean Speech Reception Thresholds (SRT) for Symmetrical Subjects

Subject                SRT (dB SPL)
                       Better Ear   Midline   Poorer Ear
S1                     61.6         69.4      60.0
S2                     64.9         70.6      68.0
S3                     60.9         64.5      62.1
S4                     62.4         67.3      62.6
S5                     63.5         69.6      65.9
S6                     63.3         68.9      62.4
S7                     69.4         74.8      70.6
S8                     63.8         71.1      66.7
S9                     64.9         73.6      69.2
S10                    68.2         71.5      67.5
S11                    62.4         68.9      63.3
S12                    66.8         67.1      65.2
S13                    67.1         73.4      66.6
S14                    63.5         67.5      62.8
S15                    63.5         69.4      63.5
S16                    62.8         66.6      63.1
Average                64.3         69.6      65.0
Standard Deviation      2.4          2.8       2.9

Mean SRT data in dB SPL.

Table 4-5. Mean Speech Reception Thresholds (SRT) for Asymmetrical Subjects

Subject                SRT (dB SPL)
                       Better Ear   Midline   Poorer Ear
A1                     64.5         70.8      69.9
A2                     66.1         69.6      70.6
A3                     65.6         73.4      70.4
A4                     65.2         70.4      72.0
A5                     63.5         68.9      68.5
A6                     63.5         69.9      63.3
A7                     67.3         70.4      74.8
A8                     64.0         68.5      66.4
A9                     64.2         69.2      67.5
A10                    67.3         72.9      70.8
A11                    65.2         69.9      73.2
A12                    65.6         73.9      68.5
A13                    63.5         70.8      67.1
A14                    66.1         77.2      70.4
A15                    65.9         71.1      67.1
A16                    67.3         72.5      68.7
Average                65.3         71.2      69.3
Standard Deviation      1.3          2.3       2.8

Mean SRT data in dB SPL.

[Figure 4-2. Mean SRT Data for the Better Ear Condition (* Not significant, p > .05).]

[Figure 4-3. Mean SRT Data from the Midline Condition (* Not significant, p > .05).]

[Figure 4-4. Mean SRT Data from the Poorer Ear Condition (* Significant, p < .05).]

[Figure 4-5. Mean SRT Data from Each Experimental Condition (Better Ear, Midline, Poorer Ear) 
for both groups.]

Table 4-6. Repeated Measures Analysis of Variance

Source             DF      F        Pr > F
Group              1,30      9.49   0.0044
Condition          2,60    100.26   0.0001
Group*Condition    2,60     10.19   0.0002

Because significant interaction was found in the Group by 
Condition analysis, the analyses of the main effects cannot be discussed independently 
from each other. Therefore, the simple effects will be addressed in greater detail in 
Chapter 5. 

Post-hoc multiple mean comparison analyses, using the Bonferroni test, revealed that: 

1. The 0.7-dB difference between the BE SRT and the PE SRT for the 
symmetrical group was not statistically significant (p = 0.2490) 

2. The 5.3-dB difference between BE SRT and MID SRT for the symmetrical 
group was statistically significant (p = 0.0001) 

3. The 4.6-dB difference between PE SRT and MID SRT for the symmetrical 
group was statistically significant (p = 0.0001) 

4. The 4.0-dB difference between BE SRT and PE SRT for the asymmetrical 
group was statistically significant (p = 0.0001) 

5. The 5.9-dB difference between BE SRT and MID SRT for the asymmetrical 
group was statistically significant (p = 0.0001) 

6. The 1.9-dB difference between PE SRT and MID SRT for the asymmetrical 
group was statistically significant (p = 0.0014) 

7. The 1.0-dB difference between symmetrical BE SRT and asymmetrical BE 
SRT was not statistically significant (p = 0.2657) 

8. The 1.6-dB difference between symmetrical MID SRT and asymmetrical MID 
SRT was not statistically significant (p = 0.0782) 

9. The 4.3-dB difference between symmetrical PE SRT and asymmetrical PE 
SRT was statistically significant (p = 0.0001). 




Because the SRT in noise may be confounded by other factors (age, gender, 
degree of hearing loss, duration of hearing loss, duration of hearing aid use, and etiology 
of hearing loss) a series of ANCOVA procedures were conducted on the entire sample. 
In specific, three ANCOVA analyses were conducted using BE SRT, MID SRT, and PE 
SRT for both groups (symmetrical and asymmetrical) as dependent variables. The 
independent variables in the ANCOVA procedures were degree of asymmetry in hearing, 
age, gender, BE PTA1000-8000Hz, PE PTA1000-8000Hz, duration of hearing loss, duration of 
hearing aid use, and etiology of hearing loss. The ANCOVA procedures found 
significance in only one analysis. Specifically, PE PTA1000-8000Hz showed a statistically 
significant influence (p = 0.0207) when PE SRT was used as the dependent variable. 

Summary of Results 
In summary, statistical analyses of the data revealed the following results: 
1. Statistically similar demographic profiles for age, gender, duration of hearing 
loss, duration of hearing aid use, and etiology of hearing loss between groups 

2. Similar audiometric configurations for the BE and PE of the symmetrical 
group 

3. Essentially equivalent audiograms between the BE of the symmetrical group 
and the BE of the asymmetrical group 

4. An asymmetry in hearing between the BE and the PE for the asymmetrical 
group from 1000 to 8000 Hz 

5. No statistically significant difference between BE SRT and PE SRT for the 
symmetrical group 

6. Statistically significant differences between BE SRT/PE SRT and MID SRT 
for the symmetrical group 







7. Statistically significant differences between either the PE SRT or the MID 
SRT and the BE SRT for the asymmetrical group 

8. A statistically significant difference between PE SRT and MID SRT for the 
asymmetrical group 

9. No statistically significant difference in BE SRTs between groups 

10. No statistically significant difference in MID SRTs between groups 

11. A statistically significant difference in PE SRTs between groups 

12. A statistically significant influence of PE degree of hearing loss on the PE 
SRT for subjects with asymmetrical SNHL. 






CHAPTER 5 
DISCUSSION 

The purpose of this investigation was to determine if individuals with 

asymmetrical SNHL exhibit different speech-perception abilities than individuals with 

symmetrical SNHL in a noisy listening environment. Measures of speech-perception 

ability using HINT sentences in a background of speech-spectrum noise were obtained 

from three different azimuths: speech at 0° and noise at 180°, speech at 45° and noise at 

225°, and speech at 315° and noise at 135°. An adaptive procedure was used to obtain 

SRT data (in dB SPL) for each subject under each condition. The SRT was defined as 

the average dB SPL at which 50% of the HINT sentences were repeated correctly. A 

two-factor repeated measures ANOVA was used to examine differences between group 

and condition. In addition, an ANCOVA was used to determine the influence of various 

factors (age, gender, degree of hearing loss, duration of hearing loss, duration of hearing 

aid use, and etiology of hearing loss) on the SRT. Results indicated significant group and 

condition effects. Moreover, a significant group by condition interaction was found. Due 

to the significant group by condition interaction, main effects (group and condition) 

cannot be discussed independently. For ease of presentation, the following discussion 

will address the relevant comparisons of simple effects under separate categories. 

Clinical/military implications, limitations, and future directions of this research will also 

be addressed. 




Group by Condition Comparisons 
Better-Ear vs. Poorer-Ear Symmetry Group 

For the subjects with symmetrical SNHL, there was no statistically significant 
difference in speech-perception ability between the BE condition and the PE condition. 
Specifically, the SRT for the BE was 64.3 dB and the SRT for the PE was 65.0 dB. This 
finding was not surprising as the two ears exhibited essentially equivalent hearing 
thresholds (mean PTA1000-8000Hz difference between BE and PE = 2.4 dB HL). These data 
are consistent with previous investigations (e.g., Humes and Roberts, 1990; Jerger, 
Jerger, and Pirozzolo, 1991; Levitt, 1982; Miller and Nicely, 1955) which have 
demonstrated that individuals with similar audiometric thresholds exhibit similar speech- 
perception ability in noise. It should be noted, however, that additional studies have not 
shown a strong relation between audiometric thresholds and speech-perception ability in 
noise (e.g., Crandell and Needleman, 1999; Levitt, 1982; Needleman and Crandell, 1992, 
1993a, b; Plomp, 1986; Walden, 1984; Walden, Prosek, and Worthington, 1975). It 
should also be noted that to date, no research has specifically compared speech- 
perception ability in noise between ears in individuals with similar audiograms. It is 
reasonable, however, to expect that individuals with symmetrical SNHL will exhibit 
similar SRTs across ears as the attenuational (i.e., hearing loss) and distortional 
characteristics (i.e., temporal resolution, frequency selectivity, frequency discrimination, 
and intensity discrimination) in each ear would be expected to be similar (Crandell, 
1991). 
Better-Ear/Poorer-Ear vs. Midline Symmetry Group 

A second result of this study was that both BE and PE conditions in the 
symmetrical group showed statistically significant improvements in speech-perception 
ability over the MID condition of 5.3 dB and 4.6 dB, respectively. These findings may 
initially seem surprising; however, at least three phenomena could be responsible for 
these results. These phenomena are: head shadow effects, decreases in speaker-listener 
distance within the direct sound field, and release from masking. 

Recall that head shadow is the decrease in the intensity of the acoustical signal as 
sound travels from one side of the head to the other. Thus, if the head were turned 45°, 
the noise would be less intense at the near ear (NE) in the +/- 45° condition than at the 
same ear in the MID condition. This would create a more favorable SNR at the NE and 
account for, in part, the improved SRT in the +/- 45° condition. The magnitude of the 
improvement in the SRT from MID to +/- 45° found in this experiment (5 dB) is in 
general agreement with other studies. In specific, the present findings are greater than the 
3-dB head shadow effect found by Carhart (1965a), Harris (1965), and Nordlund and 
Fritzell (1963), but less than the 5.7 dB found by Olsen (1965) and the 6.4 dB found by 
Tillman et al. (1963). Discrepancies between this investigation and other studies may 
exist due to differences in experimental paradigms. Specifically, no other published 
study has used the same off-midline azimuths for both speech and noise presentations 
(45° to 225° and 315° to 135°) as were used in the present study. Additionally, the signal 
azimuths in this study were achieved by rotating the subject between a pair of speakers at 
various locations while the subject remained in the same position. Furthermore, these 
discrepancies may be caused by differences in past investigations in such variables as 
subject hearing thresholds, speech stimuli, and types/levels of competing noise. For 
example, Tillman et al. (1963) used spondee words in quiet listening conditions with the 
speaker set off-midline at 45°. 




Only one unpublished study has compared speech perception from a 45° to 225° 
azimuth (speech and noise, respectively) presentation against a 0° to 180° azimuth 
(speech and noise, respectively) presentation by repositioning the head (Boney, 1987). 
However, to achieve these orientations, the speech and noise signals were recorded via a 
KEMAR (Knowles Electronic Manikin for Acoustic Research) under various conditions 
and played back under earphones to 16 subjects with normal hearing. In specific, SPIN 
(Speech Perception in Noise) sentences (Kalikow, Stevens, and Elliott, 1977) were 
recorded at azimuths of either 0° or 45° while the noise (multitalker babble) was recorded 
at azimuths of 180° (for speech at 0°) and 225° (for speech at 45°). Data from this 
investigation showed an average 40% improvement in sentential speech perception as the 
angles of presentation for speech and noise were rotated from 0°-180° to 45°-225°. 
Forty percent improvement in speech perception is equivalent to an approximate 5-dB 
reduction in SRT for SPIN sentences. In specific, past investigations have suggested that 
each dB SRT for the SPIN sentences equates to approximately 7% to 9% (Crandell, 
1991). 

Another possible factor to explain the improved SRT in the +/- 45° conditions is 
the fact that the NE is closer to the speaker within the direct sound field. By turning the 
head, the listener has moved the NE closer to the speech source by an estimated 1 to 2 
inches (approximately 2.5 cm to 5 cm). Because speech perception was assessed in the 
direct sound field (see Chapter 3) a slight decrease in distance could account for a 
relatively large change in the dB SPL of the signal reaching that ear (Crandell and 
Smaldino, 2000). To verify this assumption, a SLM was used to measure the intensity of 
the HINT calibration signal at a position 2 inches in front of the midpoint of the subject's 
head (with the subject absent). The SLM reading was 71.5 dB SPL. Recall that the 
intensity of the same signal at the midpoint of the subject's head (1 meter from the 
speaker) was 70 dB SPL. Therefore, a decrease of 2 inches in the direct sound field 
could potentially account for approximately a 1.5-dB improvement in the SRT. 

Another variable that could contribute to this improvement in the SRT as the head 
rotates from 0° to +/- 45° is the change in the signal characteristics of the target signal 
and background noise at the ears. Presumably, with head rotation, there would be 
changes in phase and level differences of the signals at the two ears that are associated 
with unmasking or the release from masking phenomenon (Bess and Tharpe, 1986; 
Boney, 1987; Bronkhorst and Plomp, 1988, 1989; Hawkins, 1986; Konkle and Schwartz, 
1981; Moore, 1996). Recall from Chapter 2 that release from masking is the 
improvement in hearing that occurs when the phase and intensity characteristics of 
speech and noise are different at the two ears (Agnew, 2000; Bronkhorst and Plomp, 
1988, 1989; Konkle and Schwartz, 1981; Moore, 1996). Theoretically, with the speech 
signal at 0° and the competing noise at 180° the phase and level of the two signals would 
be essentially equivalent at the listener's ears (Boney, 1987). However, when the head 
rotates so that the two signals are presented off-midline (e.g., speech at 45° and noise at 
225°) it is likely that differences will arise in the phase and intensity levels of the signals 
at the listener's ears. This disparity in signal characteristics at the two ears could result in 
unmasking and therefore improve the intelligibility of the speech signal by as much as 4 
to 7 dB (Carhart et al., 1967; Levitt and Rabiner, 1967a, b; Schubert and Schultz, 1962). 
However, the application of this hypothesis to the present study is speculative, as 
temporal and level characteristics of sound can only be accurately controlled and 



86 

accounted for in a laboratory setting under earphone listening conditions (Bess and 
Tharpe, 1986). Therefore, it would be difficult to determine how much of the 
improvement in the SRT seen in the +/- 45° orientations was due to changes in phase and 
level characteristics of the signals at the two ears. 
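
For reference, the off-midline advantage discussed above can be summarized compactly (a sketch using the conditions of this study; the notation below is introduced here for illustration and is not taken from the original text):

\[
\text{Advantage (dB)} = \text{SRT}_{0^{\circ}/180^{\circ}} - \text{SRT}_{45^{\circ}/225^{\circ}},
\]

where a positive value indicates a lower (better) SRT when the head is rotated 45° toward the speech loudspeaker than when the listener faces it directly.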
Better-Ear vs. Poorer-Ear Asymmetry Group 

For the asymmetrical group, the speech-perception ability in the BE condition was 
significantly better than in the PE condition by 4.0 dB. While few studies have directly 
looked at the effects of asymmetrical SNHL on speech perception, the present data are 
relatively consistent with past research involving subjects with real or simulated (i.e., ear 
occluded with a plug or muff) unilateral hearing loss (Carhart, 1965a; Harris, 1965; 
Nordlund and Fritzell, 1963; Olsen, 1965; Olsen and Carhart, 1967; Tillman et al., 1963). 
An overview of these studies suggests an improvement in speech-perception ability 
of between 3 and 6 dB when the BE is toward the speech signal and the PE is toward the 
noise. Olsen and Carhart (1967), for example, demonstrated an improvement of 36% 
(approximately a 6-dB change in SNR for the stimuli used) by occluding one ear of 
subjects and orienting the good ear toward the speech.

It seems reasonable to hypothesize that the SRT difference between the BE and 
the PE conditions was due, in part, to the threshold differences between the two ears. 
Recall that audiologic testing indicated an 11.8-dB difference between the BE PTA (1000-8000 Hz) 
and the PE PTA (1000-8000 Hz) in the asymmetry group. Past studies have suggested 
approximately a 1- to 2-dB reduction in SRT for every 10-dB decrease (improvement) in hearing 
thresholds for listeners with mild to moderately-severe SNHL (Killion, 1997; Plomp, 
1986). Thus, it is possible that the threshold difference between the ears could account 
for some of the SRT difference between ears (4.0 dB). The influence of threshold 
differences on the SRT was also supported by the ANCOVA analyses, which indicated 
that only the PE threshold values contributed to the SRT. Interestingly, past 
investigations have suggested that an individual with unilateral or asymmetrical hearing 
loss naturally attempts to maximize his or her ability to hear by placing the BE toward the 
speaker and any interfering noise toward the opposite ear (Harris, 1965; Nabelek and 
Mason, 1981).
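
As a rough illustration of that reasoning (a sketch that simply applies the 1- to 2-dB-per-10-dB rule of thumb quoted above to the 11.8-dB between-ear threshold difference; the linearity of that rule over this range is an assumption):

\[
11.8\ \text{dB} \times \frac{1\ \text{to}\ 2\ \text{dB (SRT)}}{10\ \text{dB (threshold)}} \approx 1.2\ \text{to}\ 2.4\ \text{dB},
\]

which would account for a portion, though likely not all, of the 4.0-dB difference between the BE and PE conditions.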

In addition to the threshold difference between the BE and the PE of the 
asymmetrical group, it is also possible that the PE of the asymmetrical subjects exhibited 
a greater degree of distortional characteristics than the BE. Specifically, a number of 
investigations have demonstrated a significant relationship between distortional factors 
such as frequency selectivity, frequency discrimination, temporal resolution, and/or 
intensity discrimination and degree/configuration of SNHL (see Crandell, 1991 and 
Plomp, 1986 for a review of these articles). Certainly, additional study would be needed 
to verify this hypothesis. 
Better-Ear vs. Midline Asymmetry Group 

Another finding within the asymmetrical group was that the BE SRT was 
significantly better than the MID SRT, by 5.9 dB. As with the symmetrical group, the 
finding that speech perception is better when the BE is turned toward the speech speaker 
was somewhat unexpected. The rationale for this improvement, however, appears to be the 
same as for the improvement in SRT from the MID condition to the BE condition for the 
symmetrical group (5.3 dB). That is, phenomena such as the head shadow effect, the decrease 
in speaker-listener distance within the direct sound field, and release from masking augmented 
speech perception in noise. Specifically, when the BE of the asymmetrical subject was 
turned toward the speech speaker, that ear had an SNR advantage over the same ear in the 
MID condition because the head blocked the noise coming from behind the subject. The 
magnitude of this SRT improvement (5.9 dB) is consistent with previously noted data 
showing a 3- to 6.4-dB head shadow effect (Carhart, 1965a; Harris, 1965; Nordlund and 
Fritzell, 1963; Olsen, 1965; Tillman et al., 1963) and with the data obtained from the BE of the 
subjects with symmetrical hearing loss in this study. The advantage of decreasing 
speaker-listener distance within the direct sound field would be the same for the subjects with 
asymmetrical hearing impairment as for the subjects with symmetrical hearing 
impairment. That is, as the head rotates toward the speech speaker, the distance from the 
speech signal to the ear is reduced and the signal reaches the ear at a more favorable 
SNR. Finally, when the head turns from 0° to +/- 45°, there may be differences in the 
temporal and intensity characteristics of the speech and noise signals at the two ears. 
However, as noted previously, changes in phase and intensity can only be controlled and 
measured in a laboratory under earphone listening conditions.
Poorer-Ear vs. Midline Asymmetry Group 

Lastly, the statistical analyses for the asymmetrical group showed a 
significant difference in speech-perception ability between the PE condition and the MID 
condition. Specifically, subjects with asymmetrical hearing loss obtained 1.9-dB better 
SRTs when their PE was facing the speaker than when they were positioned directly 
facing the speaker. This finding indicates that individuals with a relatively mild degree of 
asymmetry between ears still derive some benefit from the enhanced SNR when listening 
primarily with the PE in a noisy environment. Stated otherwise, the listener with 
asymmetrical hearing loss is still able to detect and take advantage of, at least in part, the 
three factors noted previously: the head shadow effect on the noise, the reduction in 
speaker-listener distance, and release from masking. Based on the present investigation, 
it remains unclear what degree of asymmetry would be necessary before these 
phenomena would no longer be beneficial.

Although a 1.9-dB improvement in SRT for the PE condition over the MID condition may 
initially appear inconsequential, Nilsson et al. (1992) demonstrated that, for the HINT 
sentences, there is a 10% change in speech perception in noise for every 1-dB change in the 
SRT. This suggests that individuals with mild asymmetrical SNHL may be able to improve 
their speech perception in noise by as much as 19% by placing the PE toward the speaker 
instead of directly facing the speaker. Obviously, this recommendation is only relevant 
when the individual with asymmetrical hearing impairment cannot position the BE 
toward the speaker with the noise behind them.
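
The 19% figure follows directly from the Nilsson et al. (1992) conversion quoted above (a sketch; it assumes the 10%-per-dB slope is approximately linear over this small range):

\[
1.9\ \text{dB} \times 10\%/\text{dB} \approx 19\%.
\]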
Better-Ear Symmetry Group vs. Better-Ear Asymmetry Group 

Comparing the two groups, there was no statistically significant difference in 
speech-perception ability between the BE of the symmetrical subjects and the BE of the 
asymmetrical subjects. Recall that both groups were matched for degree of SNHL (a 1.7-dB 
difference in PTA (1000-8000 Hz)). As noted previously, a number of past studies have shown 
that individuals with similar hearing thresholds can be expected to yield similar SRTs 
(e.g., Jerger et al., 1991; Levitt, 1982; Miller and Nicely, 1955). As no other studies have used 
similar paradigms (i.e., subjects with symmetrical and asymmetrical hearing impairment 
in an off-midline speech and noise orientation), there are no data against which to directly 
compare the current results.
Midline Symmetry Group vs. Midline Asymmetry Group 

There also was no statistically significant difference in speech-perception ability 
when comparing the MID conditions between the two groups (a 1.6-dB difference). This 
lack of a statistical difference between the two groups in the MID condition is interesting, 
as one might expect that the asymmetrical group would have obtained poorer MID speech- 
perception scores due to reductions in binaural processing. The lack of difference in 
MID speech-perception scores between the two groups would suggest that the mild 
degree of asymmetry exhibited by the subjects in the asymmetrical group was not 
sufficient to create further decreases in speech-perception ability relative to the 
symmetrical group.

It should be noted, however, that although the difference between groups for the 
MID condition was not statistically significant, individuals with asymmetrical SNHL 
showed a trend toward poorer speech perception than individuals with symmetrical 
SNHL. Data from Nilsson et al. (1992) would suggest that this trend could represent as much as 
a 16% disadvantage for individuals with asymmetrical SNHL.
Poorer-Ear Symmetry Group vs. Poorer-Ear Asymmetry Group 

The PE of the asymmetrical group had significantly poorer speech-perception 
ability than the PE of the symmetrical group, by 4.3 dB. Recall that the hearing 
thresholds for the PE differed between the two groups by 11.2 dB (difference in PTA 
(1000-8000 Hz)). Thus, based on the previously discussed research by Killion (1997) and Plomp 
(1986), it would be reasonable to expect that the degree of hearing loss in the PE of the 
asymmetrical group would account for much of the SRT disadvantage of the asymmetrical 
group relative to the symmetrical group in the PE condition. As noted 
previously, the statistical analysis showed that the degree of hearing loss in the PE had a 
statistically significant influence on the SRT for the PE condition. Therefore, it can be concluded 
that the poorer thresholds of the PE in the asymmetrical group were, at least in part, 
responsible for the poorer SRTs observed for the PE of that group.




Summary 

These data suggest that individuals with mild to moderately-severe sloping 
symmetrical SNHL have markedly improved speech-perception ability in noise when the 
speaker they are listening to is positioned off-midline at +/- 45° rather than directly in 
front of them. For individuals with mild to moderately-severe sloping 
SNHL with a mild asymmetry in hearing between ears, the data also show an 
improvement in speech-perception ability when the speaker is positioned off-midline at 
45° toward the BE rather than when they are directly facing the speaker. Furthermore, 
these data show that even if individuals with asymmetrical hearing loss cannot place the 
speaker toward the BE, speech-perception ability can still be significantly improved 
relative to the MID condition by placing the speaker at 45° toward the PE.

These data also show that individuals with asymmetrical hearing loss and 
individuals with similar symmetrical SNHL have equivalent speech-perception ability 
when speech is oriented 45° toward the BE and competing noise is oriented toward the 
PE. Furthermore, these groups have equivalent speech-perception ability when speech 
comes from directly in front of the listener and noise comes from behind the listener 
(0°-180°). However, due to the poorer hearing thresholds in the PE of individuals 
with asymmetrical SNHL, these individuals have significantly poorer 
speech-perception ability than individuals with symmetrical SNHL when speech is 
oriented 45° off-midline toward the PE and noise is oriented 45° off-midline toward the 
BE.

Clinical/Military Implications 
The present study has important clinical implications, particularly in the area of 
auditory rehabilitation and counseling of individuals with SNHL. For example, these 
data suggest that it may be better to advise the individual with symmetrical or 
asymmetrical hearing loss to position one ear (the BE, in the case of asymmetrical loss) 
toward the speaker to maximize speech-perception ability in noise. These data also suggest 
that it may be beneficial to counsel the individual with asymmetrical hearing impairment 
to place the PE toward the speech if placing the BE toward the speaker is not possible. 
This suggestion may be contrary to the more common recommendation that audiologists 
provide to individuals with SNHL: that is, that they face a speaker directly while placing 
the noise behind them. Of course, the recommendation of turning the head while listening 
in noise can only be made for the specific signal/noise orientations (and hearing losses) 
examined in this study. Overall, these data are consistent with past investigations indicating 
that the primary advantage of hearing with two ears is the increased likelihood of always 
being able to position one ear toward the speech signal (Carhart, Pollock and Lotterman, 
1963; Olsen and Carhart, 1967).

Mission success in a military operation has been shown to be directly linked to 
the efficacy of speech communication. Specifically, Garinther and Peters (1990) 
demonstrated that a 42% improvement in speech communication (from 52% to 94%) for 
an armored tank crew resulted in an 8% increase in mission success (i.e., successfully 
destroying the enemy target). Therefore, a 5-dB advantage achieved by directing speech 
communication to either ear (over midline presentation, for an individual with 
symmetrical SNHL) or a 6-dB advantage achieved by directing speech communication to 
the BE (over midline, for an individual with asymmetrical SNHL) could result in a 
substantial improvement in speech communication and mission success on the battlefield.




The differences in speech-perception ability between individuals with 
symmetrical and asymmetrical hearing loss (i.e., with the PE toward the speaker) are concerning 
for individuals who routinely work in noisy environments, such as in industry or the 
military. To explain, individuals with asymmetrical hearing impairment may place 
themselves and/or others at increased risk of danger because of their inability to detect 
and/or understand speech or auditory warning signals emanating from the side of the PE. 
For example, Army Regulation (AR) 40-501 (AR 40-501, 1998) allows a soldier with 
asymmetry in hearing to remain on active duty in a "combat fit" status. Yet, the results 
of this study would suggest that such an individual may be seriously disadvantaged in 
the confusion and noise of realistic training exercises and/or actual combat, particularly if 
the primary signal (e.g., voice commands during a live-fire exercise) were oriented toward 
the PE. Hopefully, speech-perception tests that more accurately evaluate real-world 
auditory handicap for individuals with varying degrees and configurations of hearing loss 
will be developed.

Limitations of the Research 

Potential limitations of this study should be mentioned at this point. First, these 
data can only be generalized to individuals with the relatively restricted audiometric 
configuration examined in this study. Recall that individuals with symmetrical hearing 
loss exhibited mild to moderately-severe sloping SNHL. Moreover, individuals with 
asymmetrical hearing loss exhibited only a mild degree of threshold difference between 
ears. Thus, these data may not apply to individuals with differing degrees/configurations 
of hearing loss and/or asymmetry between ears. 

Furthermore, it should also be noted that this study used +/- 45° azimuths for 
speech-signal presentation. These azimuths were chosen to represent the maximum head 
rotation that would still allow eye contact with the speaker. Thus, these data may not be 
applicable to other speaker-listener orientations. Additionally, the impact of visual cues 
on auditory speech perception, while well studied under other conditions, is unknown 
under the conditions of this study. However, it should be noted that in many military 
operations, verbal communication without visual cues is not uncommon.

Finally, as no hearing aids or assistive listening devices were used in this study, 
the results cannot be applied to individuals with symmetrical or 
asymmetrical hearing loss who use hearing aids or assistive listening devices. However, 
based on the abundance of literature demonstrating the efficacy of amplification 
strategies for individuals with hearing impairment (e.g., Hawkins and Yacullo, 1984; 
Kollmeier and Peissig, 1990; Naidoo and Hawkins, 1997; Punch, Jenison, Allan, and 
Durrant, 1991), it is reasonable to suggest that these devices would greatly assist in 
overcoming the deficits encountered by individuals with asymmetrical hearing 
impairment. Specifically, hearing aids and/or assistive listening devices should reduce 
the degree of asymmetry in hearing thresholds and thus allow the individual to use both 
ears optimally.

Future Directions 
The results of this study suggest several ideas for future research. First, 
individuals with differing degrees of SNHL, or differing degrees of asymmetry, need to 
be examined. As noted in the discussion section, it is plausible that individuals with 
more severe degrees of asymmetry may exhibit greater speech-perception difficulties 
than did the subjects in this investigation. It would be particularly useful to 
determine what extent of asymmetry would completely diminish the binaural advantages 
for speech perception in noise. Second, speech perception in noise needs to be assessed 
at azimuths different from those used in the present investigation. For example, 
comparing speech-perception scores at various angles not examined in this investigation 
would answer the question of how much head rotation is required to produce a clinically 
relevant improvement in binaural speech perception in noise. Third, the contribution of 
visual perception to auditory speech perception at various azimuths needs to be evaluated. 
Fourth, similar investigations with the use of hearing aids need to be conducted to assess 
how such amplification systems contribute to reducing the deficits imposed by 
asymmetrical hearing impairment. Fifth, the application of virtual audiometry (i.e., a true 
dichotic presentation) under earphones needs to be explored. Sixth, the impact of 
protective headgear (in the military and industry) on the binaural advantage needs to be 
examined. Only through such future investigation can valid speech-perception tests be 
developed for individuals who are required to communicate in noisy listening 
environments.






APPENDIX A 
INFORMED CONSENT FORM 




UNIVERSITY OF FLORIDA



Department of Communication Sciences & Disorders 336 Dauer Hall 

PO Box 117420 

Gainesville, FL 32611-7420 

(352)392-2113 

Fax (352)846-0243 

Dear 



At present, various research projects are being conducted in the Department of 
Communication Processes and Disorders at the University of Florida to help us better 
understand how noise and/or hearing loss affect the speech understanding abilities of 
hearing-impaired individuals. I am a doctoral student at the University of Florida and 
would like to ask your cooperation in one such project: a study of how hearing that is 
unequal in the two ears affects speech perception in noise.

If you consent to be a participant in this investigation, you will first be seen at the 
University of Florida for a hearing evaluation (pure-tone audiometry/immittance). Such 
tests are routinely done at the University of Florida to help us determine the degree and 
cause of a person's hearing loss. This evaluation will take approximately 60 minutes and 
will be provided to you at no cost whatsoever. If any untreated auditory pathology is 
detected, you will be referred to your physician.

After this evaluation, you will be asked to listen to a series of sentences from the Hearing 
In Noise Test (HINT) in a background of noise and repeat them as best you can. Some of 
the sentences will be difficult for you to understand while others will be relatively easy. 
For the experimental testing, you will be seated in a sound-treated booth. All test material 
will come from two speakers, one situated in front of you and the other in back of you. 
To minimize head movement, a neck brace (the type commonly used with neck-injury 
patients) will be placed around your neck.

Completion of this entire experiment will take approximately 2 hours, and all testing will be 
completed in one session. You will be given a 5-10 minute break every 20-30 minutes if 
you so desire. If at any time during the testing you express a desire to stop, additional 
break time will be given. Furthermore, if you indicate that you do not want to continue 
with the testing, all testing will be terminated. None of the above testing will cause any 
discomfort to you whatsoever.

All test results and information obtained from this investigation will be kept anonymous. 
All forms and information pertaining to subjects will be coded by an identification 
number. Names of subjects will appear in a master roster to be kept only by myself. Upon 
completion of this investigation, the master roster will be destroyed. In all probability, 
there will be publications and presentations of the results of this study. Most scientific 
reports and publications present results such as these in statistical form, and in some 
instances, case studies are presented. In either case, your identity will be kept 
anonymous. 

You are free to withdraw your consent and discontinue participation at any time prior to 
completion of this investigation. Your participation or non-participation will not affect 
any treatment, services or course work you might be receiving at the University of 
Florida. None of the records obtained from this study will go into any University of 
Florida record. In addition, there will be no direct benefit to you from this study. 
However, you will receive the above mentioned hearing evaluation. 

I have read the information contained in this form and give my consent to participate in 
the research project outlined there. 

Signature of Subject: 

Name of Subject (please print): 

Address: 

Birthdate: 

Telephone #: 

Signature of Investigator: 

Signature of Witness: 



If you have any questions at all regarding this study, please feel free to contact me or the 
faculty supervisor at the telephone number or address below. Questions or concerns about 
the research participants' rights can be directed to the UFIRB, PO Box 112250, University 
of Florida, Gainesville, FL 32611-2250; phone (352) 392-0433.

Dale A. Ostler, M.A., CCC-A, Principal Investigator

Carl C. Crandell, Ph.D., Faculty Supervisor 

Dept. of Communication Sciences and Disorders 

University of Florida 

461 Dauer Hall 

Gainesville, FL 32611 

(352)392-2041 






APPENDIX B 
HINT SENTENCE LISTS 



HINT List 1

1. (A/the) boy fell from (a/the) window. 

2. (A/the) wife helped her husband. 

3. Big dogs can be dangerous. 

4. Her shoes (are/were) very dirty. 

5. (A/the) player lost (a/the) shoe.

6. Somebody stole the money. 

7. (A/the) fire (is/was) very hot. 

8. She's drinking from her own cup. 

9. (A/the) picture came from (a/the) book. 

10. (A/the) car (is/was) going too fast. 

11. (A/the) boy ran down (a/the) path. 

12. Flowers grow in (a/the) garden. 

13. Strawberry jam (is/was) sweet. 

14. (a/the) shop closes for lunch. 

15. The police helped (a/the) driver. 

16. She looked in her mirror. 

17. (A/the) match fell on (a/the) floor. 

18. (A/the) fruit came in (a/the) box. 

19. He really scared his sister. 

20. (A/the) tub faucet (is/was) leaking. 

HINT List 2 

1. They heard (a/the) funny noise. 

2. He found his brother hiding. 

3. (A/the) dog played with (a/the) stick. 

4. (A/the) book tells (a/the) story. 

5. The matches (are/were) on (a/the) shelf. 

6. The milk (is/was) by (a/the) front door. 

7. (A/the) broom (is/was) in (a/the) corner. 

8. (A/the) new road (is/was) on (a/the) map. 

9. She lost her credit card. 

10. (A/the) team (is/was) playing well. 

11. (A/the) little boy left home. 

12. They're going out tonight. 

13. (A/the) cat jumped over (a/the) fence. 

14. He wore his yellow shirt.

15. (A/the) lady sits in her chair. 

16. He needs his vacation. 

17. She's washing her new silk dress. 

18. (A/the) cat drank from (a/the) saucer. 

19. Mother opened (a/the) drawer. 

20. (A/the) lady packed her bag. 



HINT List 3

1. (A/the) boy did (a/the) handstand.

2. They took some food outside. 

3. The young people (are/were) dancing. 

4. They waited for an hour. 

5. The shirts (are/were) in (a/the) closet. 

6. They watched (a/the) scary movie. 

7. The milk (is/was) in (a/the) pitcher. 

8. (A/the) truck drove up (a/the) road. 

9. (A/the) tall man tied his shoes. 

10. (A/the) letter fell on (a/the) floor. 

11. (A/the) silly boy (is/was) hiding. 

12. (A/the) dog growled at the neighbors. 

13. (A/the) tree fell on (a/the) house. 

14. Her husband brought some flowers. 

15. The children washed the plates. 

16. They went on vacation. 

17. Mother tied (a/the) string too tight. 

18. (A/the) mailman shut (a/the) gate. 

19. (A/the) grocer sells butter. 

20. (A/the) baby broke his cup. 

HINT List 4 

1. The cows (are/were) in (A/the) pasture. 

2. (A/the) dishcloth (is/was) soaking wet. 

3. They (have/had) some chocolate pudding. 

4. She spoke to her eldest son. 

5. (An/the) oven door (is/was) open. 

6. She's paying for her bread. 

7. My mother stirred her tea. 

8. He broke his leg again. 

9. (A/the) lady wore (a/the) coat. 

10. The cups (are/were) on (a/the) table. 

11. (A/the) ball bounced very high.

12. Mother cut (a/the) birthday cake. 

13. (A/the) football game (is/was) over. 

14. She stood near (a/the) window. 

15. (A/the) kitchen clock (is/was) wrong.

16. The children helped their teacher. 

17. They carried some shopping bags. 

18. Someone (is/was) crossing (a/the) road. 

19. She uses her spoon to eat. 

20. (A/the) cat lay on (a/the) bed. 






HINT List 5

1. School got out early today. 

2. (A/the) football hit (a/the) goalpost. 

3. (A/the) boy ran away from school. 

4. Sugar (is/was) very sweet. 

5. The two children (are/were) laughing. 

6. (A/the) fire truck (is/was) coming. 

7. Mother got (a/the) sauce pan. 

8. (A/the) baby wants his bottle. 

9. (A/the) ball broke (a/the) window. 

10. There (is/was) a bad train wreck. 

11. (A/the) boy broke (a/the) wooden fence. 

12. (An/the) angry man shouted. 

13. Yesterday he lost his hat. 

14. (A/the) nervous driver got lost. 

15. (A/the) cook (is/was) baking (a/the) cake. 

16. (A/the) chicken laid some eggs. 

17. (A/the) fish swam in (a/the) pond.

18. They met some friends at dinner. 

19. (A/the) man called the police. 

20. (A/the) truck made it up (a/the) hill. 



HINT List 7

1. She found her purse in (a/the) trash.

2. (A/the) table (has/had) three legs. 

3. The children waved at (a/the) train. 

4. Her coat (is/was) on (a/the) chair. 

5. (A/the) girl (is/was) fixing her dress.

6. It's time to go to bed. 

7. Mother read the instructions. 

8. (A/the) dog (is/was) eating some meat. 

9. Father forgot the bread. 

10. (A/the) road goes up (a/the) hill. 

11. The fruit (is/was) on the ground.

12. They followed (a/the) garden path. 

13. They like orange marmalade. 

14. There (are/were) branches everywhere. 

15. (A/the) kitchen sink (is/was) empty. 

16. The old gloves (are/were) dirty. 

17. The scissors (are/were) very sharp. 

18. (A/the) man cleaned his suede shoes. 

19. (A/the) raincoat (is/was) dripping wet. 

20. It's getting cold in here. 



HINT List 6 

1. (A/the) neighbor's boy (has/had) black hair. 

2. The rain came pouring down. (5 syllables) 

3. (An/the) orange (is/was) very sweet. 

4. He took the dogs for a walk. 

5. Children like strawberries. 

6. Her sister stayed for lunch. 

7. (A/the) train (is/was) moving fast. 

8. Mother shut (a/the) window. 

9. (A/the) bakery (is/was) open. 

10. Snow falls in the winter. 

11. (A/the) boy went to bed early. 

12. (A/the) woman cleaned her house. 

13. (A/the) sharp knife (is/was) dangerous. 

14. (A/the) child ripped open (a/the) bag. 

15. They had some cold cuts for lunch. 

16. She's helping her friend move. 

17. They ate (a/the) lemon pie. 

18. They (are/were) crossing (a/the) street. 

19. The sun melted the snow. 

20. (A/the) little girl (is/was) happy. 



HINT List 8 

1. (A/the) house (has/had) nine bedrooms. 

2. They're shopping for school clothes. 

3. They're playing in (a/the) park. 

4. Rain (is/was) good for the trees. 

5. They sat on (a/the) wooden bench. 

6. (A/the) child drank some fresh milk. 

7. (A/the) baby slept all night. 

8. (A/the) salt shaker (is/was) empty. 

9. (A/the) policeman knows the way. 

10. The buckets fill up quickly. 

11. He played with his toy train.

12. They're watching (a/the) cuckoo clock. 

13. Potatoes grow in the ground. 

14. (A/the) girl ran along (a/the) fence. 

15. (A/the) dog jumped on (a/the) chair. 

16. They finished dinner on time. 

17. He got mud on his shoes. 

18. They're clearing (a/the) table. 

19. Some animals sleep on straw. 

20. The police cleared (a/the) road. 






HINT List 9

1. Mother picked some flowers. 

2. (A/the) puppy played with (a/the) ball. 

3. (An/the) engine (is/was) running. 

4. (An/the) old woman (is/was) at home. 

5. They're watching (a/the) train go by. 

6. (An/the) oven (is/was) too hot. 

7. They rode their bicycles. 

8. (A/the) big fish got away. 

9. They laughed at his story. 

10. They walked across the grass. 

11. (A/the) boy (is/was) running away.

12. (A/the) towel (is/was) near (a/the) sink. 

13. Flowers can grow in (a/the) pot. 

14. He's skating with his friends. 

15. (A/the) janitor swept (a/the) floor. 

16. (A/the) lady washed (a/the) shirt. 

17. She took off her fur coat. 

18. The match boxes (are/were) empty. 

19. (A/the) man (is/was) painting (a/the) sign. 

20. (A/the) dog came home at last. 



HINT List 11

1. They're running past (a/the) house.

2. He's washing his face with soap. 

3. (A/the) dog's chasing (a/the) cat. 

4. (A/the) milkman drives (a/the) small truck. 

5. (A/the) bus leaves before (a/the) train. 

6. (A/the) baby (has/had) blue eyes. 

7. (A/the) bag fell off (a/the) shelf. 

8. They (are/were) coming for dinner. 

9. They wanted some potatoes. 

10. They knocked on (a/the) window. 

11. (A/the) girl came into (a/the) room. 

12. (A/the) field mouse found (a/the) cheese. 

13. They're buying some fresh bread. 

14. (A/the) machine (is/was) noisy. 

15. The rice pudding (is/was) ready. 

16. They had a wonderful day. 

17. (An/the) exit (is/was) well lit. 

18. (A/the) train stops at (a/the) station. 

19. He (is/was) sucking his thumb. 

20. (A/the) big boy kicked the ball. 



HINT List 10

1. (A/the) painter uses (a/the) brush. 

2. (A/the) family bought (a/the) house. 

3. Swimmers can hold their breath. 

4. She cut (a/the) steak with her knife. 

5. They're pushing an old car. 

6. The food (is/was) expensive. 

7. The children (are/were) walking home. 

8. They (have/had) two empty bottles. 

9. Milk comes in (a/the) carton. 

10. (A/the) dog sleeps in (a/the) basket. 

11. (A/the) clown (has/had) (a/the) funny face. 

12. The bath water (is/was) warm. 

13. She injured four of her fingers. 

14. He paid his bill in full. 

15. They stared at (a/the) picture. 

16. (A/the) driver started (a/the) car. 

17. (A/the) truck carries fresh fruit. 

18. (A/the) bottle (is/was) on (a/the) shelf.

19. The small tomatoes (are/were) green. 

20. (A/the) dinner plate (is/was) hot. 



HINT List 12

1. The paint dripped on the ground.
2. (A/the) towel fell on (a/the) floor.
3. (A/the) family likes fish.
4. The bananas (are/were) too ripe.
5. He grew lots of vegetables.
6. She argues with her sister.
7. (A/the) kitchen window (is/was) clean.
8. He hung up his raincoat. (5 syllables)
9. (A/the) mailman brought (a/the) letter.
10. (A/the) mother heard (a/the) baby.
11. (A/the) waiter brought (a/the) cream.
12. (A/the) teapot (is/was) very hot.
13. (An/the) apple pie (is/was) good.
14. (A/the) jelly jar (is/was) full.
15. (A/the) girl (is/was) washing her hair.
16. (A/the) girl played with (a/the) baby.
17. (A/the) cow (is/was) milked every day.
18. They called an ambulance.
19. They (are/were) drinking coffee.
20. He climbed up (a/the) ladder.



APPENDIX C 
SRT RAW DATA 



This appendix contains the speech reception threshold (SRT) raw data obtained 
from the experimental conditions for the symmetrical subject group and the asymmetrical 
subject group. The presentation level (dB SPL) of each sentence in each listening 
condition is depicted. Only the presentation levels from sentences 5-21 were used for 
statistical purposes. The legend below denotes the subject identifier and each test 
condition's listening orientation relative to the speech and noise speakers.



Legend

Subject Identifier:
S = Symmetrical subject
A = Asymmetrical subject

Listening Orientation:
R = Speech at 45° and noise at 225°
M = Speech at 0° and noise at 180°
L = Speech at 315° and noise at 135°
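
As an illustration of how these raw levels relate to the SRTs reported in the body of the dissertation, the sketch below averages the presentation levels of sentences 5-21 for one condition. It is a minimal, hypothetical Python example: the variable names and the use of a simple mean over sentences 5-21 are illustrative assumptions and are not a restatement of the exact scoring procedure; the values themselves are taken from Subject S1, condition R, in this appendix.

# Hypothetical illustration: estimate an SRT from the Appendix C presentation levels.
# levels_db maps sentence number (1-21) to the presentation level in dB SPL for one
# subject in one condition (values are Subject S1, condition R, from this appendix).
levels_db = {
    1: 64, 2: 60, 3: 56, 4: 60, 5: 56, 6: 58, 7: 60, 8: 62, 9: 60, 10: 62,
    11: 60, 12: 62, 13: 60, 14: 58, 15: 56, 16: 58, 17: 60, 18: 62, 19: 64,
    20: 62, 21: 60,
}

# Only sentences 5-21 were used for statistical purposes (see the text above);
# averaging those presentation levels gives one simple SRT estimate in dB SPL.
used = [levels_db[n] for n in range(5, 22)]
srt_estimate = sum(used) / len(used)
print(f"Estimated SRT: {srt_estimate:.1f} dB SPL")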



Subject S1


Test Conditions 


Sentence # 


R 


M 


L 


1 


64 


72 


64 


2 


60 


68 


60 


3 


56 


64 


56 


4 


60 


68 


60 


5 


56 


72 


64 


6 


58 


74 


62 


7 


60 


72 


64 


8 


62 


70 


62 


9 


60 


68 


60 


10 


62 


70 


58 


11 


60 


68 


60 


12 


62 


70 


62 


13 


60 


72 


64 


14 


58 


70 


62 


15 


56 


72 


60 


16 


58 


70 


62 


17 


60 


68 


64 


18 


62 


66 


62 


19 


64 


64 


60 


20 


62 


66 


62 


21 


60 


68 60 




Subject S3 


Test Conditions 


Sentence # 


R 


M 


L 


1 


64 


68 


60 


2 


60 


64 


56 


3 


64 


68 


60 


4 


60 


64 


56 


5 


64 


60 


60 


6 


62 


62 


62 


7 


60 


64 


60 


8 


58 


66 


62 


9 


60 


64 


64 


10 


58 


62 


66 


11 


60 


64 


64 


12 


62 


62 


62 


13 


64 


64 


64 


14 


62 


66 


62 


15 


64 


64 


60 


16 


62 


66 


62 


17 


60 


64 


64 


18 


62 


66 


62 


19 


60 


68 


60 


20 


58 


66 


62 


21 60 


68 


60 



Subject S2 


Test Conditions 


Sentence # 


R 


M 


L 


1 


76 


76 


72 


2 


72 


72 


68 


3 


76 


68 


64 


4 


72 


72 


60 


5 


68 


76 


64 


6 


70 


74 


62 


7 


68 


72 


64 


8 


66 


70 


62 


9 


64 


68 


64 


10 


66 


70 


66 


11 


64 


72 


68 


12 


66 


70 


66 


13 


64 


72 


64 


14 


66 


70 


66 


15 


68 


72 


64 


16 


70 


70 


66 


17 


68 


68 


68 


18 


70 


70 


66 


19 


72 


68 


64 


20 


74 


70 


66 


21 72 


68 64 




Subject S4 


Test Conditions 


Sentence # 


R 


M 


L 


1 


64 


68 


60 


2 


60 


64 


56 


3 


64 


60 


60 


4 


68 


64 


64 


5 


64 


68 


68 


6 


62 


66 


66 


7 


60 


64 


64 


8 


62 


66 


62 


9 


64 


68 


60 


10 


62 


70 


62 


11 


60 


68 


64 


12 


62 


66 


62 


13 


64 


68 


64 


14 


62 


66 


62 


15 


64 


64 


60 


16 


62 


66 


62 


17 


60 


68 


64 


18 


62 


70 


62 


19 


64 


68 


60 


20 


62 


70 


62 


21 


64 


68 


60 








Subject S5 


Test Conditions 




Sentence # 


R 


M 


L 




1 


72 


72 


72 




2 


68 


68 


68 




3 


64 


64 


72 




4 


68 


68 


68 




5 


64 


72 


64 




6 


66 


70 


66 




7 


68 


72 


64 




8 


66 


70 


62 




9 


64 


68 


64 




10 


62 


66 


62 




11 


64 


68 


64 




12 


66 


70 


66 




13 


64 


68 


64 




14 


66 


66 


62 




15 


64 


68 


64 




16 


66 


70 


66 




17 


66 


68 


64 




18 


66 


70 


62 




19 


68 


72 


60 




20 


70 


74 


62 




21 


68 72 64 








Subject S7 


Test Conditions 




Sentence # 


R 


M 


L 




1 


72 


76 


68 




2 


68 


72 


64 




3 


72 


76 


68 




4 


68 


72 


72 




5 


72 


68 


76 




6 


70 


70 


74 




7 


68 


72 


72 




8 


70 


74 


70 




9 


72 


76 


68 




10 


70 


78 


70 




11 


72 


80 


68 




12 


74 


78 


66 




13 


72 


76 


68 




14 


70 


74 


66 




15 


68 


72 


68 




16 


70 


74 


70 




17 


72 


76 


68 




18 


70 


74 


70 




19 


68 


76 


68 


- 


20 


70 


78 


70 


21 72 


76 


68 





Subject S6 


Test Conditions 




Sentence # 


R 


M 


L 




1 


64 


72 


68 




2 


60 


68 


64 




3 


68 


72 


68 




4 


64 


68 


64 




5 


60 


72 


60 




6 


62 


70 


62 




7 


60 


68 


64 




8 


62 


66 


66 




9 


64 


68 


64 




10 


62 


66 


62 




11 


64 


68 


64 




12 


66 


66 


62 




13 


64 


68 


64 




14 


62 


66 


62 




15 


60 


68 


64 




16 


62 


70 


62 




17 


64 


72 


64 




18 


62 


70 


66 




19 


64 


72 


64 




20 


62 


70 


62 




21 


60 


72 


64 








Subject S8 


Test Conditions 




Sentence # 


R 


M 


L 




1 


64 


72 


68 




2 


60 


68 


64 




3 


64 


64 


60 




4 


60 


68 


64 




5 


64 


72 


60 




6 


66 


70 


62 




7 


68 


68 


64 




8 


66 


70 


62 




9 


64 


72 


64 




10 


66 


70 


62 




11 


68 


72 


64 




12 


66 


70 


66 




13 


68 


72 


64 




14 


66 


74 


62 




15 


68 


76 


64 




16 


66 


74 


66 




17 


68 


72 


68 




18 


66 


70 


66 




19 


68 


68 


64 




20 


70 


70 


62 


21 


66 


68 


64 






Subject S9 


Test Conditions 


Sentence # 


R 


M 


L 


1 


72 


76 


72 


2 


68 


72 


68 


3 


64 


68 


64 


4 


68 


72 


68 


5 


64 


76 


72 


6 


66 


74 


70 


7 


64 


72 


72 


8 


66 


70 


70 


9 


64 


72 


72 


10 


62 


74 


70 


11 


64 


76 


68 


12 


62 


78 


66 


13 


64 


76 


68 


14 


66 


74 


66 


15 


64 


72 


68 


16 


66 


70 


70 


17 


68 


72 


68 


18 


66 


74 


66 


19 


68 


72 


68 


20 


66 


74 


70 


21 64 


76 


72 




Subject S11


Test Conditions 


Sentence # 


R 


M 


L 


1 


68 


76 


68 


2 


64 


72 


64 


3 


60 


68 


68 


4 


64 


72 


64 


5 


68 


68 


68 


6 


66 


70 


66 


7 


64 


72 


64 


8 


62 


70 


62 


9 


64 


68 


64 


10 


66 


70 


66 


11 


64 


68 


64 


12 


62 


66 


62 


13 


60 


68 


60 


14 


58 


70 


62 


15 


60 


68 


64 


16 


62 


66 


62 


17 


64 


68 


64 


18 


62 


70 


62 


19 


60 


72 


64 


20 


58 


70 


62 


21 


60 


68 60 



Subject S10 


Test Conditions 


Sentence # 


R 


M 


L 


1 


68 


76 


72 


2 


64 


72 


68 


3 


68 


68 


72 


4 


64 


72 


68 


5 


68 


76 


64 


6 


66 


74 


66 


7 


64 


72 


68 


8 


66 


70 


70 


9 


68 


68 


68 


10 


70 


70 


66 


11 


68 


72 


64 


12 


66 


74 


66 


13 


64 


72 


68 


14 


62 


70 


66 


15 


64 


72 


68 


16 


66 


74 


70 


17 


68 


72 


68 


18 


70 


70 


70 


19 


72 


68 


72 


20 


74 


70 


74 


21 


72 


72 


72 



Subject S12 


Test Conditions 


Sentence # 


R 


M 


L 


1 


64 


68 


64 


2 


60 


64 


60 


3 


64 


68 


64 


4 


68 


64 


68 


5 


64 


68 


72 


6 


66 


66 


70 


7 


68 


68 


68 


8 


66 


66 


66 


9 


68 


68 


64 


10 


66 


66 


66 


11 


64 


68 


64 


12 


66 


70 


62 


13 


68 


68 


64 


14 


70 


66 


62 


15 


68 


64 


64 


16 


66 


66 


62 


17 


68 


68 


64 


18 


66 


66 


66 


19 


68 


68 


64 


20 


66 


66 


66 


21 68 


68 


64 






Subject S13 


Test Conditions 


Sentence # 


R 


M 


L 


1 


72 


72 


76 


2 


68 


68 


72 


3 


64 


72 


68 


4 


68 


76 


64 


5 


64 


80 


68 


6 


66 


78 


66 


7 


68 


76 


64 


8 


66 


74 


62 


9 


64 


72 


64 


10 


66 


70 


66 


11 


68 


72 


68 


12 


66 


74 


66 


13 


68 


76 


64 


14 


66 


74 


66 


15 


68 


72 


68 


16 


66 


74 


70 


17 


64 


72 


68 


18 


66 


70 


70 


19 


68 


72 


72 


20 


70 


70 


70 


21 


68 72 


68 



Subject S14 


Test Conditions 


Sentence # 


R 


M 


L 


1 


60 


68 


64 


2 


56 


64 


60 


3 


60 


68 


64 


4 


64 


64 


68 


5 


68 


68 


64 


6 


66 


70 


66 


7 


64 


68 


64 


8 


62 


66 


66 


9 


64 


64 


64 


10 


66 


66 


62 


11 


64 


68 


64 


12 


62 


66 


62 


13 


64 


64 


60 


14 


62 


66 


62 


15 


64 


68 


64 


16 


62 


70 


62 


17 


60 


68 


64 


18 


62 


70 


62 


19 


60 


68 


64 


20 


58 


70 


66 


21 


60 


68 


64 






Subject S15 


Test Conditions 


Sentence # 


R 


M 


L 


1 


68 


68 


68 


2 


64 


64 


64 


3 


60 


60 


68 


4 


64 


64 


64 


5 


60 


68 


68 


6 


62 


70 


66 


7 


64 


68 


64 


8 


62 


70 


66 


9 


64 


72 


64 


10 


62 


70 


62 


11 


64 


68 


60 


12 


66 


70 


62 


13 


64 


72 


64 


14 


62 


70 


62 


15 


64 


72 


60 


16 


66 


70 


62 


17 


64 


68 


64 


18 


62 


66 


62 


19 


64 


68 


64 


20 


66 


70 


66 


21 


64 


68 


64 



Subject S16 


Test Conditions 


Sentence # 


R 


M 


L 


1 


64 


68 


72 


2 


60 


64 


68 


3 


56 


60 


64 


4 


60 


64 


68 


5 


64 


68 


64 


6 


66 


70 


62 


7 


68 


72 


60 


8 


70 


70 


62 


9 


68 


68 


64 


10 


66 


66 


66 


11 


64 


64 


64 


12 


62 


66 


62 


13 


60 


68 


60 


14 


58 


70 


62 


15 


60 


68 


60 


16 


58 


66 


62 


17 


60 


64 


60 


18 


62 


62 


62 


19 


60 


64 


64 


20 


62 


62 


66 


21 


64 


64 


68 








Subject A1


Test Conditions 




Sentence # 


R 


M 


L 




1 


72 


68 


72 




2 


68 


64 


68 




3 


64 


68 


64 




4 


68 


72 


60 




5 


72 


68 


64 




6 


74 


66 


66 




7 


72 


68 


64 




8 


70 


70 


66 




9 


68 


72 


64 




10 


70 


70 


62 




11 


68 


72 


60 




12 


66 


74 


62 




13 


68 


72 


64 




14 


70 


70 


66 




15 


68 


72 


64 




16 


70 


74 


66 




17 


68 


72 


64 




18 


70 


70 


66 




19 


72 


72 


68 




20 


70 


70 


66 




21 


72 


72 


64 






Subject A3 


Test Conditions 




Sentence # 


R 


M 


L 




1 


72 


76 


76 




2 


68 


72 


72 




3 


72 


68 


68 




4 


68 


72 


72 




5 


64 


76 


76 




6 


66 


74 


74 




7 


64 


72 


72 




8 


62 


70 


70 




9 


64 


72 


68 




10 


62 


70 


66 




11 


64 


72 


68 




12 


66 


74 


70 




13 


68 


76 


68 




14 


66 


74 


70 




15 


64 


76 


68 




16 


66 


74 


70 




17 


68 


72 


72 




18 


70 


74 


70 




19 


68 


72 


72 




20 


66 


74 


70 




21 68 


76 


72 



Subject A2 


Test Conditions 


Sentence # 


R 


M 


L 


1 


72 


72 


80 


2 


68 


68 


76 


3 


64 


64 


72 


4 


68 


68 


76 


5 


64 


64 


72 


6 


66 


66 


70 


7 


68 


68 


68 


8 


66 


70 


70 


9 


68 


68 


72 


10 


70 


70 


70 


11 


68 


72 


72 


12 


70 


70 


70 


13 


68 


72 


72 


14 


66 


70 


70 


15 


64 


72 


68 


16 


66 


70 


70 


17 


64 


72 


68 


18 


62 


70 


70 


19 


64 


68 


72 


20 


66 


70 


74 


21 


64 


72 


72 



Subject A4 


Test Conditions 


Sentence # 


R 


M 


L 


1 


68 


76 


80 


2 


64 


72 


76 


3 


60 


68 


72 


4 


64 


72 


76 


5 


68 


68 


72 


6 


66 


70 


74 


7 


64 


68 


72 


8 


62 


70 


70 


9 


64 


72 


68 


10 


62 


70 


70 


11 


64 


72 


72 


12 


66 


74 


70 


13 


64 


72 


68 


14 


66 


70 


70 


15 


64 


68 


72 


16 


66 


70 


70 


17 


68 


72 


72 


18 


66 


70 


74 


19 


64 


72 


76 


20 


66 


70 


78 


21 


68 


68 


76 






Subject A5 


Test Conditions 


Sentence # 


R 


M 


L 


1 


64 


72 


68 


2 


60 


68 


64 


3 


64 


72 


68 


4 


68 


68 


72 


5 


64 


64 


68 


6 


62 


66 


66 


7 


64 


68 


64 


8 


66 


66 


66 


9 


64 


68 


68 


10 


62 


70 


70 


11 


64 


72 


68 


12 


66 


70 


70 


13 


64 


68 


68 


14 


62 


70 


66 


15 


64 


68 


68 


16 


66 


70 


70 


17 


64 


68 


68 


18 


62 


70 


70 


19 


64 


72 


72 


20 


62 


70 


70 


21 


60 


72 


72 




Subject A7 


Test Conditions 


Sentence # 


R 


M 


L 


1 


76 


80 


68 


2 


72 


76 


64 


3 


76 


72 


68 


4 


80 


76 


64 


5 


76 


72 


68 


6 


74 


70 


66 


7 


72 


68 


68 


8 


70 


66 


66 


9 


72 


68 


68 


10 


74 


70 


66 


11 


72 


68 


68 


12 


74 


70 


70 


13 


76 


72 


68 


14 


78 


70 


70 


15 


80 


72 


68 


16 


78 


70 


66 


17 


76 


72 


68 


18 


74 


70 


66 


19 


76 


72 


64 


20 


74 


74 


66 


21 | 76 


72 


68 





Subject A6 


Test Conditions 




Sentence # 


R 


M 


L 




1 


76 


72 


64 




2 


72 


68 


60 




3 


68 


72 


64 




4 


64 


68 


60 




5 


60 


72 


64 




6 


62 


70 


62 




7 


64 


72 


64 




8 


62 


70 


62 




9 


64 


68 


64 




10 


62 


70 


66 




11 


60 


72 


64 




12 


62 


70 


66 




13 


64 


68 


64 




14 


62 


70 


62 




15 


64 


72 


64 




16 


66 


70 


62 




17 


64 


68 


60 




18 


66 


70 


62 




19 


68 


68 


64 




20 


66 


70 


62 




21 


64 


68 


64 








Subject A8 


Test Conditions 




Sentence # 


R 


M 


L 




1 


64 


72 


76 




2 


60 


68 


72 




3 


64 


64 


68 




4 


60 


68 


64 




5 


64 


72 


68 




6 


66 


70 


66 




7 


64 


72 


64 




8 


66 


70 


66 




9 


68 


68 


68 




10 


66 


66 


66 




11 


64 


68 


64 




12 


62 


66 


66 




13 


64 


68 


64 




14 


62 


70 


66 




15 


60 


72 


68 




16 


62 


70 


70 




17 


64 


68 


68 




18 


66 


66 


66 




19 


64 


64 


68 




20 


62 


66 


66 




21 


64 


68 


64 






Subject A9 


Test Conditions 


Sentence # 


R 


M 


L 


1 


68 


72 


72 


2 


64 


68 


68 


3 


60 


72 


72 


4 


64 


68 


68 


5 


68 


72 


64 


6 


66 


70 


66 


7 


64 


72 


64 


8 


62 


70 


66 


9 


64 


72 


64 


10 


62 


70 


66 


11 


60 


68 


68 


12 


62 


70 


70 


13 


64 


68 


72 


14 


66 


66 


70 


15 


68 


68 


72 


16 


66 


70 


70 


17 


64 


68 


68 


18 


66 


66 


66 


19 


64 


68 


68 


20 


62 


70 


66 


21 64 68 


68 



Subject A11


Test Conditions 


Sentence # 


R 


M 


L 


1 


76 


72 


72 


2 


72 


68 


68 


3 


68 


64 


72 


4 


72 


68 


68 


5 


76 


72 


64 


6 


78 


70 


66 


7 


76 


72 


64 


8 


74 


70 


62 


9 


76 


72 


60 


10 


74 


70 


62 


11 


72 


68 


64 


12 


70 


66 


62 


13 


72 


68 


64 


14 


70 


70 


66 


15 


68 


68 


68 


16 


70 


70 


66 


17 


72 


68 


64 


18 


74 


70 


66 


19 


76 


72 


68 


20 


74 


70 


70 


21 


72 


72 


72 



Subject A10 


Test Conditions 


Sentence # 


R 


M 


L 


1 


68 


76 


76 


2 


64 


72 


72 


3 


60 


76 


68 


4 


64 


72 


72 


5 


68 


76 


76 


6 


66 


74 


74 


7 


68 


72 


72 


8 


70 


70 


70 


9 


68 


72 


68 


10 


66 


70 


70 


11 


68 


72 


72 


12 


66 


74 


74 


13 


68 


76 


72 


14 


66 


74 


70 


15 


64 


72 


68 


16 


66 


74 


70 


17 


68 


76 


72 


18 


66 


74 


70 


19 


68 


72 


68 


20 


70 


70 


70 


21 


68 


72 


68 




Subject A12 


Test Conditions 


Sentence # 


R 


M 


L 


1 


76 


80 


68 


2 


72 


76 


64 


3 


68 


72 


60 


4 


64 


76 


64 


5 


68 


80 


68 


6 


66 


78 


70 


7 


64 


76 


68 


8 


66 


74 


66 


9 


68 


76 


68 


10 


70 


74 


66 


11 


72 


72 


64 


12 


70 


70 


66 


13 


68 


72 


64 


14 


70 


70 


62 


15 


72 


72 


60 


16 


70 


70 


62 


17 


68 


72 


64 


18 


66 


74 


66 


19 


68 


76 


68 


20 


70 


74 


66 


21 68 


76 


68 






Subject A13 


Test Conditions 


Sentence # 


R 


M 


L 


1 


72 


76 


68 


2 


68 


72 


64 


3 


72 


68 


60 


4 


68 


64 


64 


5 


64 


68 


60 


6 


62 


70 


62 


7 


64 


68 


60 


8 


66 


70 


62 


9 


68 


68 


64 


10 


70 


70 


62 


11 


68 


72 


64 


12 


66 


74 


66 


13 


64 


72 


64 


14 


66 


74 


66 


15 


68 


72 


64 


16 


70 


74 


66 


17 


68 


72 


64 


18 


66 


70 


62 


19 


68 


72 


64 


20 


70 


70 


66 


21 


72 


68 


64 




Subject A15


Test Conditions 


Sentence # 


R 


M 


L 


1 


68 


72 


68 


2 


64 


68 


64 


3 


68 


72 


68 


4 


72 


68 


72 


5 


68 


72 


68 


6 


66 


70 


70 


7 


68 


72 


72 


8 


66 


70 


70 


9 


64 


68 


68 


10 


66 


70 


66 


11 


64 


72 


68 


12 


62 


70 


70 


13 


64 


72 


68 


14 


66 


74 


66 


15 


64 


72 


68 


16 


66 


70 


66 


17 


68 


68 


64 


18 


66 


70 


62 


19 


68 


72 


64 


20 


66 


74 


66 


21 


68 


72 


64 



Subject A14 


Test Conditions 


Sentence # 


R 


M 


L 


1 


68 


76 


76 


2 


64 


72 


72 


3 


68 


76 


68 


4 


64 


80 


72 


5 


68 


76 


76 


6 


66 


78 


74 


7 


64 


80 


72 


8 


66 


78 


70 


9 


68 


80 


68 


10 


66 


78 


70 


11 


64 


80 


68 


12 


62 


78 


66 


13 


64 


80 


68 


14 


66 


78 


70 


15 


68 


76 


72 


16 


66 


78 


70 


17 


68 


76 


72 


18 


66 


74 


70 


19 


68 


76 


68 


20 


66 


74 


70 


21 


68 


72 


72 




Subject A16 


Test Conditions 


Sentence # 


R 


M 


L 


1 


72 


76 


76 


2 


68 


72 


72 


3 


64 


76 


68 


4 


60 


72 


64 


5 


64 


76 


68 


6 


62 


74 


70 


7 


64 


72 


68 


8 


66 


74 


70 


9 


68 


76 


68 


10 


70 


74 


70 


11 


68 


72 


68 


12 


66 


70 


70 


13 


64 


72 


68 


14 


66 


70 


70 


15 


68 


72 


68 


16 


70 


70 


70 


17 


68 


72 


68 


18 


70 


70 


70 


19 


72 


72 


68 


20 


70 


74 


66 


21 


68 


72 


68 



LIST OF REFERENCES 

Agnew, J. (2000). Optimizing binaural cues for speech understanding. The Hearing 
Review, 7(5), 20-52. 

American National Standards Institute (1991). American national standard criteria for 
permissible ambient noise during audiometric testing (ANSI S3.1-1991). New York: 
ANSI. 

Bench, J., & Bamford, J. (Eds.) (1979). Speech-hearing tests and the spoken language of 
hearing-impaired children. London: Academic. 

Bergman, M. (1957). Binaural hearing. Archives of Otolaryngology . 66, 572-578. 

Bess, F. H., & Tharpe, A. M. (1986). An introduction to unilateral sensorineural hearing 
loss in children. Ear and Hearing . 7(1) . 2-13. 

Bess, F. H., Tharpe, A. M., & Gibler, A. M. (1986). Auditory performance of children 
with unilateral sensorineural hearing loss. Ear and Hearing . 7, 20-26. 

Bocca, E. (1955). Binaural hearing: another approach. Laryngoscope, 65, 1164-1171. 

Boney, S. (1987). Binaural advantage of high- and low-predictability sentences in noise 
and reverberation. Unpublished doctoral dissertation, Vanderbilt University, Nashville, Tennessee. 

Breakey, M. R., & Davis, H. (1949). Comparisons of thresholds for speech: word and 
sentence test; receiver vs. field, and monaural vs. binaural listening. Laryngoscope, 59, 
236-250. 

Bronkhorst, A. W., & Plomp, R. (1988). The effect of head-induced interaural time and 
level differences on speech intelligibility in noise. Journal of the Acoustical Society of 
America, 83(4), 1508-1516. 

Bronkhorst, A. W., & Plomp, R. (1989). Binaural speech intelligibility in noise for 
hearing-impaired listeners. Journal of the Acoustical Society of America, 86(4), 1374-
1383. 

Byrne, D. (1980). Binaural hearing aid fitting: research findings and clinical application. 
In: Libby, E. R. (Ed.), Binaural Hearing and Amplification, Vol. II. Chicago: Zenetron 
Inc., 23-74. 




Carhart, R. (1965a). Monaural and binaural discrimination against competing sentences. 
International Audiology, 4, 5-10. 

Carhart, R. (1965b). The background squelch effect: a phenomenon of binaural listening. 
American Speech and Hearing Association, 7(10), 404-405. 

Carhart, R. (1967). Binaural reception of meaningful material. In: Graham, A. B. (Ed.), 
Sensorineural Hearing Processes and Disorders. Boston: Little, Brown and Co., 153-168. 

Carhart, R., Pollock, K., & Lotterman, S. H. (1963). Comparative efficiency of aided 
binaural hearing. American Speech and Hearing Association, 5, 779. 

Carhart, R., Tillman, T. W., & Johnson, K. R. (1967). Release of masking for speech 
through interaural time delay. Journal of the Acoustical Society of America, 42, 124-138. 

Cherry, E. C. (1953). Some experiments on the recognition of speech, with one and with 
two ears. Journal of the Acoustical Society of America . 25(5) . 975-979. 

Crandell, C. C. (1991). Individual differences in speech recognition ability: Implications 
for hearing aid selection. Ear and Hearing . 12, (Suppl), 100-107. 

Crandell, C. C., & Boney, S. (2000). Adaptive speech-recognition procedures in the 
clinical setting. In press: Journal of the American Academy of Audiology. 

Crandell, C. C., & Needleman, A. R. (1999). Modeling hearing loss via masking: 
implications for hearing aid selection. The Hearing Journal, 52, 58-62. 

Crandell, C. C., & Smaldino, J. J. (2000). Assistive technologies for the hearing 
impaired. In: Sandlin, R. E. (Ed.), Textbook of Hearing Aid Amplification (2nd edition). 
San Diego: Singular, 643-672. 

DeCroix, G., & Dehaussey, J. (1964). Binaural hearing and intelligibility. Journal of 
Auditory Research . 4. 115-134. 

Dermody, P., & Byrne, D. (1975). Loudness summation with binaural hearing aids. 
Scandinavian Audiology, 4, 23-28. 

Dirks, D., Morgan, D., & Dubno, J. (1982). A procedure for quantifying the effects of 
noise on speech recognition. Journal of Speech and Hearing Disorders, 47, 114-123. 

Duquesnoy, A. (1983). The intelligibility of sentences in quiet and in noise in aged 
listeners. Journal of the Acoustical Society of America, 74, 1136-1144. 

Durlach, N. I., Thompson, C. L., & Colburn, H. S. (1981). Binaural interaction in 
impaired listeners. Audiology, 20, 181-211. 

Feuerstein, J. F. (1992). Monaural versus binaural hearing: ease of listening, word 
recognition, and attentional effort. Ear and Hearing . 13(2) . 80-86. 







Fletcher, H. F., & Munson, W. A. (1933). Loudness, its definition, measurement and 
calculation. Journal of the Acoustical Society of America 5, 82-108. 

Garinther, G. R., & Peters, L. J. (1990). Impact of communications on armor crew 
performance. Army Research, Development & Acquisition Bulletin, January-February 
1990, 1-5. 

Gelfand, S. A., & Hochberg, I. (1976). Binaural and monaural speech discrimination
under reverberation. Audiology, 15, 72-84.

Groen, J. J., & Hellema, A. C. M. (1960). Binaural speech audiometry. Acta
Otolaryngologica, 52, 397-414.

Halverson, H. M. (1927). The upper limit of auditory localization. American Journal of 
Psychology, 38, 97-106. 

Harris, J. D. (1965). Monaural and binaural speech intelligibility and the stereophonic
effect on temporal cues. Laryngoscope, 75, 428-446.

Hawkins, D. B. (1986). Selection of hearing aid characteristics. In: Hodgson, W. R.
(Ed.), Hearing Aid Assessment and Use in Audiologic Habilitation (3rd edition).
Baltimore: Williams & Wilkins. 128-151.

Hawkins, D., & Yacullo, W. (1984). Signal-to-noise advantage of binaural hearing aids
and directional microphones under different levels of reverberation. Journal of Speech
and Hearing Disorders, 49, 278-285.

Henning, G. B. (1974). Detectability of interaural delay in high-frequency complex
waveforms. Journal of the Acoustical Society of America, 44, 84-90.

Hirsh, I. J. (1948a). Binaural summation and interaural inhibition as a function of the level
of masking noise. American Journal of Psychology, 61, 205-213.

Hirsh, I. J. (1948b). The influence of interaural phase on interaural summation and
inhibition. Journal of the Acoustical Society of America, 20, 536-544.

Hirsh, I. J. (1950b). The relation between localization and intelligibility. Journal of the
Acoustical Society of America, 22(2), 196-200.

Hirsh, I. J., & Pollack, I. (1948). The role of interaural phase in loudness. Journal of the
Acoustical Society of America, 29, 761-766.

Humes, L. E., Allen, S. K., & Bess, F. H. (1983). Horizontal sound localization skills of
unilaterally hearing-impaired children. Audiology, 19, 508-518.




Humes, L. E., & Roberts, L. (1990). Speech-recognition difficulties of the hearing-
impaired elderly: the contributions of audibility. Journal of Speech and Hearing Research,
33, 726-735.

Jerger, J., & Hayes, D. (1976). Hearing aid evaluations. Archives of Otolaryngology,
102, 214-225.

Jerger, J., Jerger, S., & Pirozzolo, F. (1991). Correlational analysis of speech audiometric
scores, hearing loss, age, and cognitive abilities in the elderly. Ear and Hearing, 12(2),
103-109.

Kaiser, J. F., & David, E. E. (1960). Reproducing the cocktail party effect. Journal of the
Acoustical Society of America, 32, 918.

Kalikow, D., Stevens, K., & Elliot, L. (1977). Development of a test of speech
intelligibility in noise using sentence materials with controlled word predictability. Journal
of the Acoustical Society of America, 61.

Keys, J. W. (1947). Binaural versus monaural hearing. Journal of the Acoustical Society
of America, 19(4), 629-631.

Klein, W. (1971). Articulation loss of consonants as a criterion of speech transmission in
a room. Journal of the Audio Engineering Society, 19, 920-922.

Killion, M. C. (1997). SNR loss: "I can hear what people say but I can't understand
them." The Hearing Review, 4(12), 8-14.

Koenig, W. (1950). Subjective effects in binaural hearing. Journal of the Acoustical
Society of America, 22(1), 61-62.

Kollmeier, B., & Peissig, J. (1990). Speech intelligibility enhancement by interaural
magnification. Acta Otolaryngologica (Stockholm), Suppl. 469, 215-223.

Konkle, D. F., & Schwartz, D. M. (1981). Binaural amplification: a paradox. In: Bess, F.
A., Freeman, B. A., & Sinclair, S. (Eds.), Amplification in Education. Washington, DC:
Alexander Graham Bell Association for the Deaf.

Kupyer, P. (1972). The cocktail party effect. Audiology, 11, 277-282.

Levitt, H. (1982). Speech discrimination ability in the hearing impaired: spectrum 
considerations. Vanderbilt Hearing Aid Report. In: Monographs of Contemporary 
Audiology. Upper Darby, PA. 32-43. 

Levitt, H., & Rabiner, L. R. (1967a). Binaural release from masking for speech and gain
in intelligibility. Journal of the Acoustical Society of America, 42, 601-608.




Levitt, H., & Rabiner, L. R. (1967b). Predicting binaural gain in intelligibility and release
from masking for speech. Journal of the Acoustical Society of America, 42, 820-829.

Libby, E. R. (1980). In search of the two-eared man. In: Libby, E. R. (Ed.), Binaural
Hearing and Amplification, Vol. I. Chicago: Zenetron. 1-36.

Licklider, J. (1948). The influence of interaural phase relations upon the masking of
speech by white noise. Journal of the Acoustical Society of America, 20, 150-159.

Lochner, J. P. A., & Burger, J. F. (1961). The binaural summation of speech signals.
Acustica, 11(5), 313-317.

MacKeith, N. W., & Coles, R. R. A. (1971). Binaural advantages in hearing of speech.
Journal of Laryngology and Otology, 85, 213-232.

Markides, A. (1977). Binaural Hearing Aids. New York: Academic Press.

Miller, G. A., & Nicely, P. E. (1955). An analysis of perceptual confusions among some
English consonants. Journal of the Acoustical Society of America, 27, 338-352.

Mills, A. W. (1958). On the minimum audible angle. Journal of the Acoustical Society of
America, 30, 237-246.

Moncur, J. P., & Dirks, D. D. (1967). Binaural and monaural speech intelligibility in
reverberation. Journal of Speech and Hearing Research, 10, 186-195.

Moore, B. C. J. (1996). Perceptual consequences of cochlear hearing loss and their
implications for the design of hearing aids. Ear and Hearing, 17(2), 133-161.

Moore, B. C. J. (1997). An Introduction to the Psychology of Hearing (4th edition). San
Diego: Academic Press.

Moore, B. C. J. (1998). Cochlear Hearing Loss. London: Whurr Publishers Ltd.

Mueller, H. G., & Hawkins, D. B. (1995). Three important considerations in hearing aid
selection. In: Sandlin, R. E. (Ed.), Handbook of Hearing Aid Amplification, Vol. II. San
Diego: Singular Publishing Group, Inc. 31-60.

Nabelek, A. K. (1993). Communication in noisy and reverberant environments. In:
Studebaker, G. A., & Hochberg, I. (Eds.), Acoustical Factors Affecting Hearing Aid
Performance (2nd edition). Boston: Allyn and Bacon. 15-28.

Nabelek, A. K., & Mason, D. (1981). Effect of noise and reverberation on binaural and
monaural word identification by subjects with various audiograms. Journal of Speech and
Hearing Research, 24(3), 375-383.




Nabelek, A. K., & Pickett, J. M. (1974a). Monaural and binaural speech perception
through hearing aids under noise and reverberation with normal and hearing-impaired
listeners. Journal of Speech and Hearing Research, 17, 724-739.

Nabelek, A. K., & Pickett, J. M. (1974b). Reception of consonants in a classroom as
affected by monaural and binaural listening, noise, reverberation, and hearing aids. Journal
of the Acoustical Society of America, 56(2), 628-639.

Naidoo, S. V., & Hawkins, D. B. (1997). Monaural/binaural preferences: effect of
hearing aid circuit on speech intelligibility and sound quality. Journal of the American
Academy of Audiology, 8, 188-202.

Needleman, A. R., & Crandell, C. C. (1992). Speech recognition in noise by listeners with 
simulated hearing loss. Paper presented at the annual meeting of the American Auditory 
Society, San Antonio, TX. 

Needleman, A. R., & Crandell, C. C. (1993a). Speech recognition in individuals with
simulated sensorineural hearing loss. Paper presented at the University of Texas
Southwestern Allied Health Research Forum, Dallas, TX.

Needleman, A. R., & Crandell, C. C. (1993b). Speech perception by listeners with
simulated sensorineural hearing loss. Paper presented at the annual meeting of the
American Academy of Audiology, Phoenix, AZ.

Nilsson, M., Felker, D., Sumida, A., & Senne, A. (1992). The influence of hearing loss on
speech understanding in noise. Paper presented at the annual conference of the House Ear
Institute on Issues in Advanced Hearing Aid Research, Lake Arrowhead, California.

Nilsson, M., Soli, D., & Sullivan, J. (1994). Development of the Hearing in Noise Test 
for the measurement of speech reception thresholds in quiet and in noise. Journal of the 
Acoustical Society of America , 95, 1085-1099. 

Nordlund, B. (1964). Directional audiometry. Acta Otolaryngologica, 57, 1-18.

Nordlund, B., & Fritzell, B. (1963). The influence of azimuths on speech signal. Acta
Otolaryngologica (Stockholm), 56, 132-142.

Olsen, W. O. (1965). The effect of head movement on speech perception under monaural
and binaural listening conditions. American Speech and Hearing Association, 7(10), 405.

Olsen, W. O., & Carhart, R. (1967). Development of test procedures for evaluation of
binaural hearing aids. Bulletin of Prosthetic Research, 10, 22-49.

Olsen, W. O., & Noffsinger, D. (1974). Comparison of one new and three old tests of
auditory adaptation. Archives of Otolaryngology, 99, 94-99.




Ostler, D. A., & Crandell, C. C. (1999). Adaptive speech recognition procedures in the
clinic setting. Paper presented at the annual convention of the American Academy of
Audiology, Miami Beach, Florida.

Peutz, V. (1971). Articulation loss of consonants as a criterion for speech transmission in
a room. Journal of the Audio Engineering Society, 19, 915-919.

Pickles, J. O. (1988). An Introduction to the Physiology of Hearing (2nd edition). London:
Academic Press Limited.

Plomp, R. (1978). Auditory handicap of hearing impairment and the limited benefit of
hearing aids. Journal of the Acoustical Society of America, 63(2), 533-549.

Plomp, R. (1986). A signal-to-noise ratio model for the speech-reception threshold of the
hearing impaired. Journal of Speech and Hearing Research, 29, 146-154.

Plomp, R., & Mimpen, A. (1979). Improving the reliability of testing the speech reception
threshold for sentences. Audiology, 18, 43-52.

Pollack, I. (1948). Monaural and binaural threshold sensitivity for tones and for white
noise. Journal of the Acoustical Society of America, 20(1), 52-57.

Pollack, I., & Pickett, J. M. (1958). Stereophonic listening and speech intelligibility
against voice babble. Journal of the Acoustical Society of America, 30(2), 131-133.

Prosser, S., Turrini, M., & Arslan, E. (1991). Effects of different noises on speech
discrimination by the elderly. Acta Otolaryngologica (Stockholm), Suppl. 476, 136-142.

Punch, J. L., Jenison, R. L., Allan, J., & Durrant, J. D. (1991). Evaluation of three
strategies for fitting hearing aids binaurally. Ear and Hearing, 12(3), 205-215.

Reynolds, G., & Stevens, S. (1960). Binaural summation of loudness. Journal of the
Acoustical Society of America, 32, 1337-1344.

Sandel, T. T., Teas, D. C., Feddersen, W. E., & Jeffress, L. (1955). Localization of sound
from single and paired sources. Journal of the Acoustical Society of America, 27, 842-852.

Scharf, B. (1968). Binaural loudness summation as a function of bandwidth. Reports of 
the International Congress on Acoustics . 

Schubert, E. D., & Schultz, M. C. (1962). Some aspects of binaural signal selection.
Journal of the Acoustical Society of America, 34, 844-849.

Shaw, W. A., Newman, E. B., & Hirsh, I. J. (1947). The difference between monaural
and binaural thresholds. Journal of Experimental Psychology, 37, 229-242.

Silman, S., & Silverman, C. A. (1991). Acoustic-immittance assessment. Auditory
Diagnosis: Principles and Applications. San Diego: Academic Press, Inc. 71-136.




Sivian, L. J., & White, S. D. (1933). On minimum audible sound fields. Journal of the
Acoustical Society of America, 4, 288-321.

Soli, S. D., & Nilsson, M. (1994). Assessment of communication handicap with the
HINT. Hearing Instruments, 45(2), 12-16.

Stevens, S. S., & Newman, E. B. (1936). The localization of actual sources of sound.
American Journal of Psychology, 48, 297-306.

Stream, R. W., & Dirks, D. D. (1974). Effect of loudspeaker position on differences
between earphone and free-field thresholds (MAP and MAF). Journal of Speech and
Hearing Research, 17, 549-567.

Studebaker, G. A. (1991). Measures of intelligibility and quality. In: Studebaker, G. A.,
Bess, F. H., & Beck, L. (Eds.), The Vanderbilt Hearing Aid Report II. Parkton, MD: York
Press. 185-195.

Tillman, T. W., Kasten, R. N., & Horner, J. S. (1963). Effect of head shadow on
reception of speech. American Speech and Hearing Association, 5(10), 778-779.

Van Tasell, D., & Yanz, J. (1987). Speech recognition threshold in noise: effects of
hearing loss, frequency response, and speech materials. Journal of Speech and Hearing
Research, 30, 377-386.

Viehweg, R., & Campbell, R. (1960). Localization difficulty in monaurally impaired
listeners. Transactions of the American Otological Society, 48, 339-350.

Walden, B. E. (1984). Speech perception of the hearing-impaired. In: Jerger, J. (Ed.),
Hearing Disorders in Adults. San Diego: College-Hill Press. 263-309.

Walden, B. E., Prosek, R. A., & Worthington, D. W. (1975). Auditory and audiovisual
feature transmission in hearing-impaired adults. Journal of Speech and Hearing Research,
18, 272-280.

Yost, W. A., & Nielsen, D. W. (1985). Fundamentals of Hearing: An Introduction (2nd
edition). New York: Holt, Rinehart and Winston.

Zurek, P. M. (1993). Binaural advantages and directional effects in speech intelligibility.
In: Studebaker, G. A., & Hochberg, I. (Eds.), Acoustical Factors Affecting Hearing Aid
Performance (2nd edition). Boston: Allyn and Bacon. 255-281.



BIOGRAPHICAL SKETCH 
Dale A. Ostler was born on 1 May 1960 in Moses Lake, Washington, USA. The
fourth of seven children, he graduated from high school in 1978 and entered college at
Brigham Young University (BYU) in Provo, Utah. He interrupted his studies at BYU to
serve a 2-year mission for his church, The Church of Jesus Christ of Latter-day Saints, in
Argentina. On returning from his mission, he re-enrolled at BYU and completed his B.S.
in Communication Sciences and Disorders in 1986 and his M.A. in Audiology in 1988.
After obtaining his M.A., he accepted a direct commission in the United States
Army as an Audiologist and took his first assignment at Walter Reed Army Medical
Center (WRAMC) in Washington, DC. While stationed at WRAMC, he served as a team
member on the first-ever Audiology deployment to a theater of war in support of
Operation Desert Storm. Subsequent Army assignments took him to Tripler Army
Medical Center in Hawaii and to the Army's Medical Department Center and School in
San Antonio, TX, where he was chief instructor in the Audiology Technician's course.
Dale currently holds the rank of Major in the Army. On completion of his doctoral studies
at the University of Florida, he will assume an assignment at the United States Army
Aeromedical Research Laboratory, Ft. Rucker, Alabama, as an Audiology researcher.

Dale is married to the former Julie Brockman. They have three children: 
Benjamin, age 7; Kara, age 5; and Nathan, age 15 months. 






I certify that I have read this study and that in my opinion it conforms to 
acceptable standards of scholarly presentation and is fully adequate, in scope and quality, 
as a dissertation for the degree of Doctor of Philosophy. 




Associate Professor of Communication 
Sciences and Disorders 



I certify that I have read this study and that in my opinion it conforms to 
acceptable standards of scholarly presentation and is fully adequate, in scope and quality, 
as a dissertation for the degree of Doctor of Philosophy. 






Kenneth Gerhardt

Professor of Communication Sciences 
and Disorders 

I certify that I have read this study and that in my opinion it conforms to 
acceptable standards of scholarly presentation and is fully adequate, in scope and quality,
as a dissertation for the degree of Doctor of Philosophy. 





Scott K. Griffith 
Associate Professor of Communication 
Sciences and Disorders 



I certify that I have read this study and that in my opinion it conforms to 
acceptable standards of scholarly presentation and is fully adequate, in scope and quality,
as a dissertation for the degree of Doctor of Philosophy. 




Alan Hutson 

Research Associate Professor of 

Statistics, Division of Biostatistics 

I certify that I have read this study and that in my opinion it conforms to 
acceptable standards of scholarly presentation and is fully adequate, in scope and quality,
as a dissertation for the degree of Doctor of Philosophy. 

Nancy L. Vause 

Colonel, United States Army 



This dissertation was submitted to the Graduate Faculty of the Department of 
Communication Sciences and Disorders in the College of Liberal Arts and Sciences and 
to the Graduate School and was accepted as partial fulfillment of the requirements for the 
degree of Doctor of Philosophy. 

August 2000 



Dean, Graduate School 


